Jan 25 00:09:19 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 25 00:09:19 crc restorecon[4702]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 25 00:09:20 crc restorecon[4702]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 
00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 
crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 
00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 
crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 00:09:20 crc restorecon[4702]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 25 00:09:20 crc kubenswrapper[4947]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 25 00:09:20 crc kubenswrapper[4947]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 25 00:09:20 crc kubenswrapper[4947]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 25 00:09:20 crc kubenswrapper[4947]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 25 00:09:20 crc kubenswrapper[4947]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 25 00:09:20 crc kubenswrapper[4947]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.903982 4947 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906553 4947 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906576 4947 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906583 4947 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906588 4947 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906593 4947 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906598 4947 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906604 4947 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906611 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906617 4947 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 
00:09:20.906623 4947 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906626 4947 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906630 4947 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906634 4947 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906640 4947 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906654 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906659 4947 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906664 4947 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906668 4947 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906672 4947 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906677 4947 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906682 4947 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906686 4947 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906689 4947 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906693 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906696 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906700 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906704 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906708 4947 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906712 4947 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906717 4947 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906721 4947 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906726 4947 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906730 4947 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906735 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906740 4947 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906745 4947 feature_gate.go:330] unrecognized feature gate: Example
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906750 4947 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906755 4947 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906760 4947 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906765 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906770 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906777 4947 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906781 4947 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906787 4947 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906793 4947 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906797 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906802 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906806 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906811 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906817 4947 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906822 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906827 4947 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906831 4947 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906836 4947 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906840 4947 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906847 4947 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906853 4947 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906857 4947 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906863 4947 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906867 4947 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906872 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906877 4947 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906882 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906886 4947 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906893 4947 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906899 4947 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906903 4947 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906909 4947 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906913 4947 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906918 4947 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906922 4947 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907199 4947 flags.go:64] FLAG: --address="0.0.0.0"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907214 4947 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907223 4947 flags.go:64] FLAG: --anonymous-auth="true"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907228 4947 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907233 4947 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907238 4947 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907243 4947 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907248 4947 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907253 4947 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907257 4947 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907261 4947 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907265 4947 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907269 4947 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907273 4947 flags.go:64] FLAG: --cgroup-root=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907278 4947 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907282 4947 flags.go:64] FLAG: --client-ca-file=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907286 4947 flags.go:64] FLAG: --cloud-config=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907289 4947 flags.go:64] FLAG: --cloud-provider=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907293 4947 flags.go:64] FLAG: --cluster-dns="[]"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907298 4947 flags.go:64] FLAG: --cluster-domain=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907302 4947 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907306 4947 flags.go:64] FLAG: --config-dir=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907310 4947 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907314 4947 flags.go:64] FLAG: --container-log-max-files="5"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907319 4947 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907324 4947 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907328 4947 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907332 4947 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907336 4947 flags.go:64] FLAG: --contention-profiling="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907340 4947 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907344 4947 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907348 4947 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907354 4947 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907360 4947 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907364 4947 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907368 4947 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907372 4947 flags.go:64] FLAG: --enable-load-reader="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907376 4947 flags.go:64] FLAG: --enable-server="true"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907380 4947 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907386 4947 flags.go:64] FLAG: --event-burst="100"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907390 4947 flags.go:64] FLAG: --event-qps="50"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907394 4947 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907398 4947 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907403 4947 flags.go:64] FLAG: --eviction-hard=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907408 4947 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907412 4947 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907416 4947 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907421 4947 flags.go:64] FLAG: --eviction-soft=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907424 4947 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907428 4947 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907432 4947 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907436 4947 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907440 4947 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907444 4947 flags.go:64] FLAG: --fail-swap-on="true"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907448 4947 flags.go:64] FLAG: --feature-gates=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907453 4947 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907457 4947 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907461 4947 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907465 4947 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907469 4947 flags.go:64] FLAG: --healthz-port="10248"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907473 4947 flags.go:64] FLAG: --help="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907477 4947 flags.go:64] FLAG: --hostname-override=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907481 4947 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907486 4947 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907490 4947 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907494 4947 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907499 4947 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907502 4947 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907507 4947 flags.go:64] FLAG: --image-service-endpoint=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907511 4947 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907515 4947 flags.go:64] FLAG: --kube-api-burst="100"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907519 4947 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907524 4947 flags.go:64] FLAG: --kube-api-qps="50"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907528 4947 flags.go:64] FLAG: --kube-reserved=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907532 4947 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907536 4947 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907540 4947 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907544 4947 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907548 4947 flags.go:64] FLAG: --lock-file=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907552 4947 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907557 4947 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907561 4947 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907567 4947 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907572 4947 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907576 4947 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907580 4947 flags.go:64] FLAG: --logging-format="text"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907584 4947 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907589 4947 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907593 4947 flags.go:64] FLAG: --manifest-url=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907597 4947 flags.go:64] FLAG: --manifest-url-header=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907603 4947 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907607 4947 flags.go:64] FLAG: --max-open-files="1000000"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907612 4947 flags.go:64] FLAG: --max-pods="110"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907617 4947 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907621 4947 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907625 4947 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907629 4947 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907634 4947 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907643 4947 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907647 4947 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907656 4947 flags.go:64] FLAG: --node-status-max-images="50"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907661 4947 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907665 4947 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907669 4947 flags.go:64] FLAG: --pod-cidr=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907673 4947 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907680 4947 flags.go:64] FLAG: --pod-manifest-path=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907683 4947 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907688 4947 flags.go:64] FLAG: --pods-per-core="0"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907692 4947 flags.go:64] FLAG: --port="10250"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907696 4947 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907721 4947 flags.go:64] FLAG: --provider-id=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907725 4947 flags.go:64] FLAG: --qos-reserved=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907730 4947 flags.go:64] FLAG: --read-only-port="10255"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907734 4947 flags.go:64] FLAG: --register-node="true"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907738 4947 flags.go:64] FLAG: --register-schedulable="true"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907742 4947 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907749 4947 flags.go:64] FLAG: --registry-burst="10"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907753 4947 flags.go:64] FLAG: --registry-qps="5"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907757 4947 flags.go:64] FLAG: --reserved-cpus=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907761 4947 flags.go:64] FLAG: --reserved-memory=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907766 4947 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907771 4947 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907775 4947 flags.go:64] FLAG: --rotate-certificates="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907779 4947 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907783 4947 flags.go:64] FLAG: --runonce="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907787 4947 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907791 4947 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907795 4947 flags.go:64] FLAG: --seccomp-default="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907799 4947 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907804 4947 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907808 4947 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907814 4947 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907818 4947 flags.go:64] FLAG: --storage-driver-password="root"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907822 4947 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907826 4947 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907830 4947 flags.go:64] FLAG: --storage-driver-user="root"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907834 4947 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907838 4947 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907842 4947 flags.go:64] FLAG: --system-cgroups=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907846 4947 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907854 4947 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907858 4947 flags.go:64] FLAG: --tls-cert-file=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907862 4947 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907866 4947 flags.go:64] FLAG: --tls-min-version=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907870 4947 flags.go:64] FLAG: --tls-private-key-file=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907874 4947 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907878 4947 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907882 4947 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907886 4947 flags.go:64] FLAG: --v="2"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907891 4947 flags.go:64] FLAG: --version="false"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907896 4947 flags.go:64] FLAG: --vmodule=""
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907901 4947 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907906 4947 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908002 4947 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908008 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908012 4947 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908016 4947 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908020 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908025 4947 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908030 4947 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908035 4947 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908040 4947 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908046 4947 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908052 4947 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908058 4947 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908063 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908067 4947 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908072 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908075 4947 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908079 4947 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908083 4947 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908086 4947 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908090 4947 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908094 4947 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908097 4947 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908101 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908105 4947 feature_gate.go:330] unrecognized feature gate: Example
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908110 4947 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908113 4947 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908117 4947 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908134 4947 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908138 4947 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908142 4947 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908145 4947 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908149 4947 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908153 4947 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908156 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908160 4947 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908165 4947 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908169 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908172 4947 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908176 4947 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908180 4947 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908185 4947 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908189 4947 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908194 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908198 4947 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908202 4947 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908205 4947 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908209 4947 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908212 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908216 4947 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908220 4947 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908225 4947 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908228 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908232 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908237 4947 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908241 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908245 4947 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908249 4947 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908253 4947 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908257 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908261 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908266 4947 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908270 4947 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908273 4947 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908277 4947 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908281 4947 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908284 4947 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908288 4947 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908292 4947 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908299 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908302 4947 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908306 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.908319 4947 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.917560 4947 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.917578 4947 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917645 4947 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917651 4947 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917655 4947 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917659 4947 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917663 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917667 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917671 4947 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917674 4947 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917678 4947 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917682 4947 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917686 4947 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917689 4947 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917693 4947 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917696 4947 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan
25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917700 4947 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917704 4947 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917708 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917711 4947 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917715 4947 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917719 4947 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917723 4947 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917727 4947 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917731 4947 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917736 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917742 4947 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917748 4947 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917754 4947 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917759 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917763 4947 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917767 4947 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917771 4947 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917775 4947 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917778 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917782 4947 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917787 4947 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917791 4947 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917795 4947 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917800 4947 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917805 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917809 4947 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917812 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917816 4947 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917820 4947 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917823 4947 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917827 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917830 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917833 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917837 4947 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917841 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917844 4947 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917848 4947 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917852 4947 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917856 4947 feature_gate.go:330] 
unrecognized feature gate: NetworkSegmentation Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917861 4947 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917866 4947 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917870 4947 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917874 4947 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917878 4947 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917882 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917886 4947 feature_gate.go:330] unrecognized feature gate: Example Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917890 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917893 4947 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917897 4947 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917901 4947 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917904 4947 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917908 4947 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917911 4947 feature_gate.go:330] unrecognized feature gate: 
OnClusterBuild Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917915 4947 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917919 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917922 4947 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917926 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.917932 4947 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918029 4947 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918035 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918040 4947 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918045 4947 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918049 4947 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918053 4947 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918056 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918060 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918064 4947 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918067 4947 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918072 4947 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918077 4947 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918080 4947 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918084 4947 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918088 4947 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918092 4947 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918096 4947 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918100 4947 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918103 4947 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918107 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918110 4947 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918114 4947 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918118 4947 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918142 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918146 4947 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 
00:09:20.918150 4947 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918153 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918157 4947 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918160 4947 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918164 4947 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918167 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918171 4947 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918175 4947 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918178 4947 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918188 4947 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918192 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918196 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918201 4947 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918205 4947 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918210 4947 feature_gate.go:330] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918215 4947 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918220 4947 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918225 4947 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918228 4947 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918232 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918236 4947 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918239 4947 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918243 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918247 4947 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918251 4947 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918255 4947 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918259 4947 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918264 4947 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918269 4947 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918273 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918277 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918281 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918285 4947 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918289 4947 feature_gate.go:330] unrecognized feature gate: Example Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918292 4947 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918296 4947 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918299 4947 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918303 4947 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918307 4947 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918310 4947 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918314 4947 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918318 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918322 4947 
feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918327 4947 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918331 4947 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918335 4947 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.918340 4947 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.918465 4947 server.go:940] "Client rotation is on, will bootstrap in background" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.922113 4947 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.922237 4947 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
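The kubelet logs the effective gate set three times above as a Go map dump (`feature gates: {map[CloudDualStackNodeIPs:true ...]}`). When triaging a log like this, it can help to turn that dump into a structured value. A minimal sketch, assuming the exact `{map[Name:bool ...]}` format shown in the lines above; the helper name `parse_feature_gates` is ours, not part of any Kubernetes tooling:

```python
import re

def parse_feature_gates(line: str) -> dict[str, bool]:
    """Parse a kubelet 'feature gates: {map[...]}' log line into a dict.

    The Go map dump in the log looks like: {map[Name:true Other:false ...]}
    Returns an empty dict if the line carries no gate map.
    """
    m = re.search(r"feature gates: \{map\[(.*?)\]\}", line)
    if not m:
        return {}
    return {
        name: value == "true"
        for name, value in (pair.split(":") for pair in m.group(1).split())
    }

# Abbreviated sample in the same shape as the log lines above.
line = ('I0125 00:09:20.918340 4947 feature_gate.go:386] feature gates: '
        '{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}')
gates = parse_feature_gates(line)
# gates == {'CloudDualStackNodeIPs': True, 'KMSv1': True, 'NodeSwap': False}
```

With the map in hand it is easy to diff the three dumps and confirm they agree (in this log all three report the identical fifteen gates).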
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.922693 4947 server.go:997] "Starting client certificate rotation" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.922712 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.922930 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-11 15:44:47.555907348 +0000 UTC Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.923074 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.928706 4947 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 25 00:09:20 crc kubenswrapper[4947]: E0125 00:09:20.931905 4947 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.935290 4947 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.946546 4947 log.go:25] "Validated CRI v1 runtime API" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.969407 4947 log.go:25] "Validated CRI v1 image API" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.971524 4947 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.974863 4947 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-25-00-04-45-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.974914 4947 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.002856 4947 manager.go:217] Machine: {Timestamp:2026-01-25 00:09:21.000512021 +0000 UTC m=+0.233502521 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:07b95270-97eb-4b89-897d-837b061280fd BootID:a468ef55-66d7-4612-bf14-5eff54a3bf14 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 
Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:65:36:04 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:65:36:04 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:6d:3e:57 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b3:ad:bb Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1c:9e:bc Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b2:49:02 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:3e:b0:5f:4b:63:14 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f6:41:79:18:98:94 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.003331 4947 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.003586 4947 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.004293 4947 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.004616 4947 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.004673 4947 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"10
0Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.005115 4947 topology_manager.go:138] "Creating topology manager with none policy" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.005164 4947 container_manager_linux.go:303] "Creating device plugin manager" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.005405 4947 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.005658 4947 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.006186 4947 state_mem.go:36] "Initialized new in-memory state store" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.006388 4947 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.007724 4947 kubelet.go:418] "Attempting to sync node with API server" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.007763 4947 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.007859 4947 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.007885 4947 kubelet.go:324] "Adding apiserver pod source" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.007906 4947 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.012819 4947 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.012858 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.012966 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.012968 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.013054 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.013284 4947 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.014380 4947 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015016 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015043 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015053 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015061 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015095 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015104 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015113 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015144 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015156 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015168 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015184 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015196 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015422 4947 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015943 4947 server.go:1280] "Started kubelet" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.016659 4947 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.016788 4947 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.017401 4947 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 25 00:09:21 crc systemd[1]: Started Kubernetes Kubelet. Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.018812 4947 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.019481 4947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188dd0c22c898057 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-25 00:09:21.015914583 +0000 UTC m=+0.248905033,LastTimestamp:2026-01-25 00:09:21.015914583 +0000 UTC m=+0.248905033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.020360 4947 server.go:460] "Adding debug handlers to kubelet server" Jan 25 00:09:21 crc 
kubenswrapper[4947]: I0125 00:09:21.021762 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.021886 4947 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.021925 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:11:46.709996903 +0000 UTC Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.022175 4947 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.022852 4947 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.022873 4947 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.022946 4947 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.023712 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="200ms" Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.023831 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.023879 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.024168 4947 factory.go:55] Registering systemd factory Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.024210 4947 factory.go:221] Registration of the systemd container factory successfully Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.024733 4947 factory.go:153] Registering CRI-O factory Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.024773 4947 factory.go:221] Registration of the crio container factory successfully Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.024870 4947 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.024945 4947 factory.go:103] Registering Raw factory Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.024972 4947 manager.go:1196] Started watching for new ooms in manager Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.026601 4947 manager.go:319] Starting recovery of all containers Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.042742 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.042876 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 25 
00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.042904 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.042925 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.042947 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.042973 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.042993 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043014 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043040 4947 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043060 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043079 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043100 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043149 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043175 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043195 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043219 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043238 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043258 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043278 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043299 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043318 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043338 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043360 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043381 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043403 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043424 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043448 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" 
seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043473 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043538 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043608 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043628 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043690 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043710 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043730 4947 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043750 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043769 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043790 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043811 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043830 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043851 4947 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043907 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043929 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043949 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043970 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043990 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044009 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044031 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044054 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044073 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044093 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044185 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044210 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044239 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044261 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044283 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044317 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044339 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044361 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044382 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044402 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044424 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044445 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044464 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044486 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044504 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044525 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044542 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044562 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044584 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045574 4947 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045636 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045747 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045772 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045838 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045859 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045882 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045900 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045930 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045952 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045980 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046001 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046021 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046042 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046060 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046083 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046103 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046123 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046183 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046202 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046224 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046245 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046264 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046282 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046302 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046324 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046346 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046365 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046387 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046412 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046481 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046501 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046526 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046545 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046564 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046586 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046614 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046640 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046662 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046682 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046704 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046725 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046748 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046769 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046790 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046812 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046833 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046852 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046871 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046889 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046908 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046931 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046952 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046973 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046993 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047013 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047093 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047114 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047160 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047180 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047200 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047220 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047244 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047264 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047288 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047307 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047327 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047347 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047366 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047387 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047406 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047429 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047450 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047470 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047492 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047514 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047535 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047557 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047578 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047598 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047618 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047675 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047696 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047721 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047740 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047762 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047783 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047806 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047826 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047851 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047872 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047894 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047918 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047939 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047962 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047982 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048005 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048024 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048043 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048064 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048083 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048108 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048151 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048171 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048255 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048273 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048294 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048312 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048331 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048350 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048371 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048391 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048411 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048433 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048451 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048472 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048493 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048513 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048533 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048552 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048571 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048592 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048613 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d"
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048632 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048652 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048670 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048690 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048709 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048729 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048750 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048767 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048788 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048807 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048833 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048852 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048871 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048890 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048909 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048930 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048949 4947 reconstruct.go:97] "Volume reconstruction finished" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048964 4947 reconciler.go:26] "Reconciler: start to sync state" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.064315 4947 manager.go:324] Recovery completed Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.083076 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.085272 4947 
kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.086386 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.086431 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.086446 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.087647 4947 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.087671 4947 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.087702 4947 state_mem.go:36] "Initialized new in-memory state store" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.088288 4947 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.088361 4947 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.088410 4947 kubelet.go:2335] "Starting kubelet main sync loop" Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.088497 4947 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.089920 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.090011 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.100789 4947 policy_none.go:49] "None policy: Start" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.101657 4947 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.101690 4947 state_mem.go:35] "Initializing new in-memory state store" Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.122837 4947 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.155160 4947 manager.go:334] "Starting Device Plugin manager" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.155223 4947 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.155267 4947 server.go:79] "Starting device plugin registration server" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.155850 4947 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.155875 4947 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.156157 4947 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.156363 4947 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.156395 4947 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.167824 4947 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.188661 4947 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.188909 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.191169 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.191640 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.191662 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.192015 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.192374 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.192507 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.192970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.193014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.193030 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.193323 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.193607 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.193704 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.193926 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.193984 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.193999 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.194076 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.194094 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.194105 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.194284 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.194728 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.194787 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195045 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195090 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195107 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195195 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195216 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195225 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195345 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195404 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195458 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196096 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196151 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196231 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196311 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196332 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196343 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196477 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196506 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196846 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196908 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196929 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.197184 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.197238 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.197253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.224557 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="400ms" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251487 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251562 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251626 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251659 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251694 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251744 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251780 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251810 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251840 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251920 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251972 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.252010 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.252042 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.252073 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.252101 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.256792 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.258787 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.258886 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.258906 4947 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.258946 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.259513 4947 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353330 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353413 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353449 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353484 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353524 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353561 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353607 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353643 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353682 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353718 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353732 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353738 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353796 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353720 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354059 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354105 4947 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354175 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354219 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354259 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353837 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354563 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353794 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353807 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353817 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354619 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353823 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353878 
4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354716 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354715 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354622 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.460237 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.462524 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.462555 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.462563 4947 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.462588 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.463158 4947 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.522863 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.527036 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.543082 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.566468 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.567724 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-fbb7dadb153c9a91b75f55e4a292c36345ee75d584515436f6c6788e1c35ff78 WatchSource:0}: Error finding container fbb7dadb153c9a91b75f55e4a292c36345ee75d584515436f6c6788e1c35ff78: Status 404 returned error can't find the container with id fbb7dadb153c9a91b75f55e4a292c36345ee75d584515436f6c6788e1c35ff78 Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.569526 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-331aa06eaaf78b4ef385a69502120661488de4a4490bc487edbee40bc40cf3c1 WatchSource:0}: Error finding container 331aa06eaaf78b4ef385a69502120661488de4a4490bc487edbee40bc40cf3c1: Status 404 returned error can't find the container with id 331aa06eaaf78b4ef385a69502120661488de4a4490bc487edbee40bc40cf3c1 Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.572556 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.587041 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-78b6c1162d16175dbc873c561984530bfa1e22e7a83a11b4e0430cafc67fe6c1 WatchSource:0}: Error finding container 78b6c1162d16175dbc873c561984530bfa1e22e7a83a11b4e0430cafc67fe6c1: Status 404 returned error can't find the container with id 78b6c1162d16175dbc873c561984530bfa1e22e7a83a11b4e0430cafc67fe6c1 Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.590503 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-0d724686ba079198403fd4cb80554349ad0e05c50edabb8b14fdaabf9708615c WatchSource:0}: Error finding container 0d724686ba079198403fd4cb80554349ad0e05c50edabb8b14fdaabf9708615c: Status 404 returned error can't find the container with id 0d724686ba079198403fd4cb80554349ad0e05c50edabb8b14fdaabf9708615c Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.625705 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="800ms" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.863310 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.864664 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.864718 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 
25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.864736 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.864772 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.865423 4947 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.926700 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.926800 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.020454 4947 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.022821 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 18:16:24.143945422 +0000 UTC Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.098117 4947 generic.go:334] "Generic (PLEG): container finished" 
podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e28d9e5ded99984c96a07848bed082c840a86b273e0809a7103e07e789b81147" exitCode=0 Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.098172 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e28d9e5ded99984c96a07848bed082c840a86b273e0809a7103e07e789b81147"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.098303 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fbb7dadb153c9a91b75f55e4a292c36345ee75d584515436f6c6788e1c35ff78"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.098425 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.100500 4947 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d" exitCode=0 Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.100588 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.100629 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0d724686ba079198403fd4cb80554349ad0e05c50edabb8b14fdaabf9708615c"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.100754 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.101238 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.101276 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.101291 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.101612 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.101658 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.101673 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.103160 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.103200 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"78b6c1162d16175dbc873c561984530bfa1e22e7a83a11b4e0430cafc67fe6c1"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.106074 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923" exitCode=0 Jan 25 00:09:22 crc 
kubenswrapper[4947]: I0125 00:09:22.106182 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.106260 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a207852b7b1543c03dc47677866a9e1fad4f8df8a89f47ef87183436862ee696"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.106480 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.107462 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.107491 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.107501 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.111375 4947 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7f77f372463a9126e2e0c7904e58fcca5c3868b594d72878374d63b703decdb2" exitCode=0 Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.111466 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7f77f372463a9126e2e0c7904e58fcca5c3868b594d72878374d63b703decdb2"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.111548 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"331aa06eaaf78b4ef385a69502120661488de4a4490bc487edbee40bc40cf3c1"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.111753 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.112907 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.112948 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.112961 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.116569 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.117618 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.117668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.117682 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:22 crc kubenswrapper[4947]: W0125 00:09:22.227096 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 25 00:09:22 crc kubenswrapper[4947]: E0125 00:09:22.227238 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Jan 25 00:09:22 crc kubenswrapper[4947]: W0125 00:09:22.304224 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 25 00:09:22 crc kubenswrapper[4947]: E0125 00:09:22.304346 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Jan 25 00:09:22 crc kubenswrapper[4947]: E0125 00:09:22.427591 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="1.6s" Jan 25 00:09:22 crc kubenswrapper[4947]: E0125 00:09:22.439246 4947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188dd0c22c898057 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-25 00:09:21.015914583 +0000 UTC m=+0.248905033,LastTimestamp:2026-01-25 00:09:21.015914583 +0000 UTC 
m=+0.248905033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 25 00:09:22 crc kubenswrapper[4947]: W0125 00:09:22.567073 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 25 00:09:22 crc kubenswrapper[4947]: E0125 00:09:22.567186 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.666399 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.667931 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.667966 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.667975 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.667999 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 00:09:22 crc kubenswrapper[4947]: E0125 00:09:22.668508 4947 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Jan 
25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.945784 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 25 00:09:22 crc kubenswrapper[4947]: E0125 00:09:22.947079 4947 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.023689 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:44:08.48680075 +0000 UTC Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.115985 4947 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f93db88442ef5460a399110b43ee9b68fa585bb81ec5430c8873dc2d4f3cf725" exitCode=0 Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.116068 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f93db88442ef5460a399110b43ee9b68fa585bb81ec5430c8873dc2d4f3cf725"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.116248 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.117229 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.117262 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.117274 4947 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.119194 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"20298eba3286e5999a381eba946a8d66115b05b2c0b73c61c7c005aa95bd1f27"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.119367 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.120701 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.120735 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.120749 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.123769 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.123802 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.123817 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.123909 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.124654 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.124699 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.124713 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.127897 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.127929 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.127930 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.128057 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157"} Jan 
25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.128681 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.128709 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.128722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.131118 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.131248 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.131261 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.131273 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.131283 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.131379 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.132066 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.132116 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.132154 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.323452 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.682757 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.024468 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 02:03:37.932403812 +0000 UTC Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.138076 4947 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d10f9303fa6076a41afb25c4d043318794314c6cc59b46e03cd0bdd746b1e601" exitCode=0 Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.138293 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d10f9303fa6076a41afb25c4d043318794314c6cc59b46e03cd0bdd746b1e601"} Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.138374 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.138531 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.138557 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.138570 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140239 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140239 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140363 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140392 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140314 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140451 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140624 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140655 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140676 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.268915 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.270917 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.270982 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.271004 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.271052 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.661817 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.024721 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:02:17.516977459 +0000 UTC Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.147573 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4a15a23addbc1b8cb1796b79902764e660dc4fc0580462bf7731757d5f1a0950"} Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.147681 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9965e2f5bdf4e230aeba0b28f3a4fdcc763f1aff38f22bc0e6943c0743c74230"} Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.147712 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.147781 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.149738 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.149812 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.149984 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.150027 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.150197 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.150429 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.024899 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:10:15.389053282 +0000 UTC Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.161530 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cdcfd5fa8ec227693c167e99fd7b156bb6bbaddd092b65155ab263cbd8660322"} Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.161603 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4ce3214b2ff8e7214adb833b87eca4bf7a980d278ddfa27b68ccbd2945bc47fc"} Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.161627 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e4244155a87fddbbcf4f9cab51737b8e3761a3b81afab80397c8e9ffff0a28d6"} Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.161666 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.163016 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.163058 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.163079 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.999348 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.025061 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:26:45.394013713 +0000 UTC Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.047453 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.047919 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.049836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.049900 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.049934 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.166825 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.167839 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.167890 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.167900 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.567867 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.568235 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.570080 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:27 crc 
kubenswrapper[4947]: I0125 00:09:27.570170 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.570202 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.576511 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.026089 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:44:33.259414726 +0000 UTC Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.170321 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.170658 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.171950 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.171996 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.172015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.173170 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.173210 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:28 crc 
kubenswrapper[4947]: I0125 00:09:28.173227 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.309928 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.409040 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.027296 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:55:05.656734428 +0000 UTC Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.179796 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.179834 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.181572 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.181627 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.181644 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.181744 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.181785 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:29 crc kubenswrapper[4947]: 
I0125 00:09:29.181808 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.489039 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.027507 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 23:13:24.071785701 +0000 UTC Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.166667 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.182672 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.182765 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.184038 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.184095 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.184107 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.184535 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.184584 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.184603 
4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:31 crc kubenswrapper[4947]: I0125 00:09:31.027718 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 15:11:51.170274798 +0000 UTC Jan 25 00:09:31 crc kubenswrapper[4947]: E0125 00:09:31.167964 4947 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 25 00:09:32 crc kubenswrapper[4947]: I0125 00:09:32.028161 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 09:35:08.284027436 +0000 UTC Jan 25 00:09:32 crc kubenswrapper[4947]: I0125 00:09:32.489310 4947 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 25 00:09:32 crc kubenswrapper[4947]: I0125 00:09:32.489468 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 25 00:09:33 crc kubenswrapper[4947]: I0125 00:09:33.022737 4947 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 25 00:09:33 crc kubenswrapper[4947]: I0125 00:09:33.029044 4947 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:47:21.693232188 +0000 UTC Jan 25 00:09:33 crc kubenswrapper[4947]: W0125 00:09:33.854226 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 25 00:09:33 crc kubenswrapper[4947]: I0125 00:09:33.854386 4947 trace.go:236] Trace[1536081183]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Jan-2026 00:09:23.852) (total time: 10001ms): Jan 25 00:09:33 crc kubenswrapper[4947]: Trace[1536081183]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:09:33.854) Jan 25 00:09:33 crc kubenswrapper[4947]: Trace[1536081183]: [10.001658458s] [10.001658458s] END Jan 25 00:09:33 crc kubenswrapper[4947]: E0125 00:09:33.854425 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 25 00:09:34 crc kubenswrapper[4947]: E0125 00:09:34.029157 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 25 00:09:34 crc kubenswrapper[4947]: I0125 00:09:34.029251 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 23:37:11.138368969 +0000 
UTC Jan 25 00:09:34 crc kubenswrapper[4947]: E0125 00:09:34.272202 4947 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 25 00:09:34 crc kubenswrapper[4947]: W0125 00:09:34.318406 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 25 00:09:34 crc kubenswrapper[4947]: I0125 00:09:34.318546 4947 trace.go:236] Trace[529832670]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Jan-2026 00:09:24.316) (total time: 10001ms): Jan 25 00:09:34 crc kubenswrapper[4947]: Trace[529832670]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:09:34.318) Jan 25 00:09:34 crc kubenswrapper[4947]: Trace[529832670]: [10.001708631s] [10.001708631s] END Jan 25 00:09:34 crc kubenswrapper[4947]: E0125 00:09:34.318575 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 25 00:09:34 crc kubenswrapper[4947]: I0125 00:09:34.662059 4947 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 25 00:09:34 crc kubenswrapper[4947]: I0125 00:09:34.662264 4947 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 25 00:09:34 crc kubenswrapper[4947]: I0125 00:09:34.723717 4947 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 25 00:09:34 crc kubenswrapper[4947]: I0125 00:09:34.723803 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 25 00:09:35 crc kubenswrapper[4947]: I0125 00:09:35.030266 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:08:52.007240758 +0000 UTC Jan 25 00:09:36 crc kubenswrapper[4947]: I0125 00:09:36.030801 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:41:03.086785493 +0000 UTC Jan 25 00:09:37 crc kubenswrapper[4947]: I0125 00:09:37.031218 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 21:16:22.971767243 +0000 UTC Jan 25 00:09:37 crc kubenswrapper[4947]: I0125 00:09:37.472641 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:37 crc kubenswrapper[4947]: I0125 
00:09:37.474622 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:37 crc kubenswrapper[4947]: I0125 00:09:37.474658 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:37 crc kubenswrapper[4947]: I0125 00:09:37.474668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:37 crc kubenswrapper[4947]: I0125 00:09:37.474716 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 00:09:37 crc kubenswrapper[4947]: E0125 00:09:37.481807 4947 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.032227 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:28:23.442228201 +0000 UTC Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.090312 4947 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.346848 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.347117 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.348909 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.348999 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 
00:09:38.349023 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.366184 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.415101 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.415322 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.416761 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.416809 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.416823 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.033105 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 09:43:52.734824074 +0000 UTC Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.210946 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.213193 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.213260 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.213283 4947 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.671540 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.671810 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.673534 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.673595 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.673618 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.679275 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.722382 4947 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.725933 4947 trace.go:236] Trace[1964698199]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Jan-2026 00:09:25.411) (total time: 14314ms): Jan 25 00:09:39 crc kubenswrapper[4947]: Trace[1964698199]: ---"Objects listed" error: 14314ms (00:09:39.725) Jan 25 00:09:39 crc kubenswrapper[4947]: Trace[1964698199]: [14.314331381s] [14.314331381s] END Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.725954 4947 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.725967 4947 
reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.727604 4947 trace.go:236] Trace[1399948782]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Jan-2026 00:09:25.059) (total time: 14667ms): Jan 25 00:09:39 crc kubenswrapper[4947]: Trace[1399948782]: ---"Objects listed" error: 14667ms (00:09:39.727) Jan 25 00:09:39 crc kubenswrapper[4947]: Trace[1399948782]: [14.667614283s] [14.667614283s] END Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.727641 4947 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.774766 4947 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.800844 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.808555 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.857641 4947 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48884->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.857726 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 
192.168.126.11:48884->192.168.126.11:17697: read: connection reset by peer" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.019842 4947 apiserver.go:52] "Watching apiserver" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.031193 4947 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.031492 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.031978 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.031986 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.032047 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.032184 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.032249 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.032282 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.032298 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.032314 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.032321 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.033994 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 19:39:58.734447231 +0000 UTC Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.034903 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.035390 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.035775 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.035985 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.036722 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.037286 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.037951 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.038194 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.038308 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"iptables-alerter-script" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.061836 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.078694 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.087970 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.100990 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.115583 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.124684 4947 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.124971 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128668 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128722 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128750 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128778 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128801 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128823 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128849 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128879 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128903 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128925 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128949 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128972 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128999 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129058 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129080 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129105 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129152 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129179 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129206 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 
00:09:40.129229 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129263 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129320 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129349 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129379 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129384 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: 
"metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129435 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129502 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129555 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129605 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129641 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 25 00:09:40 crc 
kubenswrapper[4947]: I0125 00:09:40.129675 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129767 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129810 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129846 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129880 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129923 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129981 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130025 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130062 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130102 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130167 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 25 00:09:40 crc 
kubenswrapper[4947]: I0125 00:09:40.130203 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130248 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130284 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130319 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130358 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130544 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130577 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130612 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130840 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130876 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130912 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 
00:09:40.130947 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131060 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131097 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131159 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131236 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131270 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131305 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131340 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131375 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131410 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131443 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131475 
4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131510 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131554 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131589 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131628 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131662 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131699 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131739 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131773 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131834 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131870 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131905 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131937 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131972 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132064 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132107 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132192 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 25 
00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132233 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132269 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132304 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132343 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132386 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132419 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132453 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132487 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132522 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132560 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132672 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: 
\"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132713 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132752 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132803 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132836 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133034 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133071 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133104 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133167 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133204 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133238 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133272 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133309 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129559 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133344 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129817 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129893 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129914 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130027 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130070 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130792 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131207 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131246 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131509 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131628 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131629 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131942 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132047 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132577 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132750 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.134066 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.134820 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.135708 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.135863 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.136774 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.136745 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.137459 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.137619 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.137576 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.136397 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.137765 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.136552 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.136583 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.136673 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.136718 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.137761 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.137913 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133378 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.137965 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.137997 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138024 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138064 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138113 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138159 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138184 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138194 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138199 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138210 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.136352 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138376 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138439 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138502 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138560 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138617 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138676 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138735 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138793 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138852 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138909 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138962 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139016 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139071 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139181 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139245 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139304 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139362 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139418 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139473 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139528 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139591 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139656 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139716 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139776 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139872 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139925 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139980 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140038 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140093 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140196 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140257 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140308 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140374 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140433 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140489 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140547 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140607 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140666 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140724 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140788 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140868 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.141986 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142063 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142171 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142234 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142292 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142351 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142415 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142478 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142541 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 25 
00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142602 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142667 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142768 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143083 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143184 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143240 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") 
pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143328 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143376 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143514 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143688 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143743 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143798 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143887 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143935 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145386 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145553 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145584 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146585 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18f
ac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.155773 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156169 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156385 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156496 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156602 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156706 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138464 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139076 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139116 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.160138 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139110 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139370 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139508 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.160187 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139502 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139572 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139696 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140272 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140306 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140501 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.141380 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.141511 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.141783 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.141828 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.141928 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142073 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142184 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142323 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.160298 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143012 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.144460 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.144476 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.144596 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.144683 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.144703 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.144717 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.144765 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.144813 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145020 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145046 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145067 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145314 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145380 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145169 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145449 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145744 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145898 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145969 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146091 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146199 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146265 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146449 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146470 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146466 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146820 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146889 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.147527 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.147986 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.148273 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.148530 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.149444 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.149505 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.149709 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.149816 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.150081 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.150323 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.150810 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.151113 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.151497 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.151946 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.152547 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.152643 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.152864 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.153027 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.153503 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.153658 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.153769 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.153780 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.153809 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.154086 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.153853 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.154876 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.155357 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.155392 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.155572 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156195 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156370 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156612 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156688 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.157097 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.157238 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.157283 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:09:40.657237646 +0000 UTC m=+19.890228286 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.157519 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.157534 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.158084 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.158354 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.158163 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.158545 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.158599 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.158857 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.158868 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161353 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161441 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161508 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161568 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161621 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161687 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161745 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161811 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161883 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161952 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.162006 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") 
" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.162060 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163377 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163454 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163489 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163568 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163595 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163620 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163652 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163686 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163717 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163739 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163770 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163793 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163813 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163837 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163971 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163986 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163997 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164011 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164022 4947 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164033 4947 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 
00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164044 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164056 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164069 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164080 4947 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164090 4947 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164101 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164112 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164121 4947 reconciler_common.go:293] "Volume detached for 
volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164150 4947 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164165 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164176 4947 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164188 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164200 4947 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164212 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164222 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164233 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164247 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164259 4947 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164274 4947 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164285 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164294 4947 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164304 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 
crc kubenswrapper[4947]: I0125 00:09:40.164314 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164324 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164334 4947 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164352 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164361 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164371 4947 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164381 4947 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164393 4947 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164402 4947 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164411 4947 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164420 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164431 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164442 4947 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164453 4947 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164462 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node 
\"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164472 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164482 4947 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164492 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164502 4947 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164513 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.164512 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.164606 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-25 00:09:40.664583462 +0000 UTC m=+19.897574132 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164522 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164666 4947 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164695 4947 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164716 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164733 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164750 4947 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164766 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164783 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164799 4947 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164815 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164829 4947 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164845 4947 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164861 4947 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 
25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164882 4947 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164901 4947 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164916 4947 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164931 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164947 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164961 4947 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164977 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164992 4947 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165007 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165029 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165047 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165062 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165076 4947 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165094 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165112 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: 
I0125 00:09:40.165158 4947 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165175 4947 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165191 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165204 4947 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165219 4947 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165234 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165249 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165264 4947 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165281 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165296 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165315 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165333 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165352 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165367 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165370 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165384 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165405 4947 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165420 4947 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165436 4947 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165451 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.165457 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.165585 4947 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:40.665551376 +0000 UTC m=+19.898542026 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165720 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165468 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165936 4947 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165951 4947 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165964 4947 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166003 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166015 4947 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166025 4947 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166035 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166045 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166055 4947 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166067 4947 reconciler_common.go:293] "Volume detached for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166079 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166091 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166103 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166116 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166139 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166150 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166161 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 
00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166172 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166183 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166193 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166202 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166212 4947 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166221 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166232 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166242 4947 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166253 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166262 4947 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166272 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166281 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166290 4947 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166300 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166311 4947 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" 
(UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166320 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166331 4947 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166341 4947 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166351 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166361 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166814 4947 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.172857 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.174691 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.174974 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.175897 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.176719 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.177848 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.179055 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.179310 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.179332 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.179348 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.179417 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:40.679394099 +0000 UTC m=+19.912384539 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.179556 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.180475 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.183525 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.183559 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.183647 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.183669 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.183742 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:40.683718423 +0000 UTC m=+19.916708863 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.186734 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.186952 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.187427 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.188598 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.192296 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.193933 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.194871 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.195247 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.195415 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.195438 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.195692 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.195871 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.195873 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.196293 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.198576 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.198609 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.198648 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.199102 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.199878 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.200200 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.200286 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.200297 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.201258 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.201640 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.201952 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.202218 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.203182 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.204530 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.204710 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.204848 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.205780 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.206337 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.206817 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.206982 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.207235 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.207454 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.207533 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.208149 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.208515 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.208578 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.210247 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.211441 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.212826 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.216230 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.217080 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.217926 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.220358 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.223470 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.223815 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.224003 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.224235 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.225432 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.225712 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.229600 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.229989 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.233051 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.256079 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587" exitCode=255 Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.256308 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587"} Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267251 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267339 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267389 4947 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267406 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath 
\"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267417 4947 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267428 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267441 4947 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267452 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267461 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267473 4947 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267486 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 
00:09:40.267498 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267511 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267522 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267534 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267544 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267554 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267565 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267576 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267586 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267596 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267608 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267617 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267626 4947 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267635 4947 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267648 4947 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 25 
00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267658 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267668 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267678 4947 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267690 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267700 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267710 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267721 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267734 4947 reconciler_common.go:293] 
"Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267743 4947 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267822 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.285074 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.298004 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.300580 4947 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.300734 4947 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.298420 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.300624 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.298276 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.300202 4947 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302254 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302324 4947 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302380 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302438 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302506 4947 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302578 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302648 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302713 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302770 4947 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302827 4947 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302883 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302940 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302999 4947 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.303055 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.303110 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.303213 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.303288 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.303356 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.303413 4947 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.303472 4947 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.311222 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.315959 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.316313 4947 scope.go:117] "RemoveContainer" containerID="86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.327586 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.333114 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.355682 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.368855 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.370494 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.376810 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.381033 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.391055 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.404453 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.404481 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.404491 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.404502 4947 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.405555 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: W0125 00:09:40.417740 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-89c43ee14330bafa5ccf697e907c218ae9778f4e3b321f71a8480da557845b5a WatchSource:0}: Error finding container 89c43ee14330bafa5ccf697e907c218ae9778f4e3b321f71a8480da557845b5a: Status 404 returned error can't find the container with id 89c43ee14330bafa5ccf697e907c218ae9778f4e3b321f71a8480da557845b5a Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.421422 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.443705 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.707820 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708058 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:09:41.708021725 +0000 UTC m=+20.941012175 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.708196 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.708249 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.708284 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.708306 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708408 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708429 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708446 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708454 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708466 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708474 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708481 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708490 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708518 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:41.708490906 +0000 UTC m=+20.941481386 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708549 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:41.708534687 +0000 UTC m=+20.941525167 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708579 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:41.708567758 +0000 UTC m=+20.941558238 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708600 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:41.708589748 +0000 UTC m=+20.941580228 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.034359 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:57:31.093080334 +0000 UTC Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.092049 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.092728 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.093800 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.094571 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.095294 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.095810 4947 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.097612 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.098228 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.099326 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.099865 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.100818 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.101625 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.102183 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.103050 4947 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.103555 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.104438 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.104999 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.105460 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.106473 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.107011 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.107850 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.108432 4947 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.108838 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.109844 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.110312 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.111394 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.112000 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.112892 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.113664 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.114634 4947 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.115204 4947 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.115325 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.117256 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.118301 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.118809 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.120559 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.121877 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 25 00:09:41 
crc kubenswrapper[4947]: I0125 00:09:41.122535 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.123610 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.124396 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.125469 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.126054 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.126104 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.127100 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.127889 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.128685 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.129207 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.130018 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.130709 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" 
path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.131536 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.131991 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.132779 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.133284 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.133825 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.134607 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.145262 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.164804 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.192795 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.207817 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.253055 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.259618 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6"} Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 
00:09:41.259668 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8"} Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.259680 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"497539bc10f42d322f3cc495f6e37e610eb1b2660005af423a8f5e9c051c1cbe"} Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.261795 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717"} Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.261825 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"33557528254a67298d143fc3bc0c98a21c1500acadc8b9119eb0339f05944f10"} Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.263891 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.265801 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59"} Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.265974 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.267156 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"89c43ee14330bafa5ccf697e907c218ae9778f4e3b321f71a8480da557845b5a"} Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.299488 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.319886 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.339742 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.357357 4947 csr.go:261] certificate signing request csr-lwkwk is approved, waiting to be issued Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.363423 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.368457 4947 csr.go:257] certificate signing request csr-lwkwk is issued Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.393160 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.412690 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.430791 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.465222 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.492629 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.505770 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.717895 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.717974 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.718000 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.718018 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.718037 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718176 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 
00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718217 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:09:43.718194284 +0000 UTC m=+22.951184724 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718288 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:43.718261935 +0000 UTC m=+22.951252375 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718338 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718447 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718522 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718550 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718461 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:43.718441709 +0000 UTC m=+22.951432139 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718342 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718656 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718675 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718671 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:43.718632504 +0000 UTC m=+22.951623064 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718746 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:43.718715416 +0000 UTC m=+22.951705896 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.034836 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:43:57.201057183 +0000 UTC Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.089509 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.089611 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.089668 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.089731 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.089855 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.089972 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.255495 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-2w6nd"] Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.255920 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-2w6nd" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.259883 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mdgrh"] Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.260403 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: W0125 00:09:42.264017 4947 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 25 00:09:42 crc kubenswrapper[4947]: W0125 00:09:42.264051 4947 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.264078 4947 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.264078 4947 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets 
\"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 25 00:09:42 crc kubenswrapper[4947]: W0125 00:09:42.264194 4947 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.264259 4947 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.264732 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.264847 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.265303 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.266099 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.269413 4947 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"proxy-tls" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.284957 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.306259 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 
00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.322505 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.338560 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.353810 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.370293 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-25 00:04:41 +0000 UTC, rotation deadline is 2026-11-11 17:24:01.652225139 +0000 UTC Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.370346 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6977h14m19.281882658s for next certificate rotation Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.371078 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.390965 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.407549 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.421850 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.427678 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxfgl\" (UniqueName: \"kubernetes.io/projected/0a5c5a9a-cc45-4715-8e37-35798d843870-kube-api-access-gxfgl\") pod \"node-resolver-2w6nd\" (UID: \"0a5c5a9a-cc45-4715-8e37-35798d843870\") " 
pod="openshift-dns/node-resolver-2w6nd" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.428016 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f67ec28-baae-409e-a42d-03a486e7a26b-proxy-tls\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.428055 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqztm\" (UniqueName: \"kubernetes.io/projected/5f67ec28-baae-409e-a42d-03a486e7a26b-kube-api-access-hqztm\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.428093 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0a5c5a9a-cc45-4715-8e37-35798d843870-hosts-file\") pod \"node-resolver-2w6nd\" (UID: \"0a5c5a9a-cc45-4715-8e37-35798d843870\") " pod="openshift-dns/node-resolver-2w6nd" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.428119 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f67ec28-baae-409e-a42d-03a486e7a26b-mcd-auth-proxy-config\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.428183 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5f67ec28-baae-409e-a42d-03a486e7a26b-rootfs\") pod 
\"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.436298 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.454789 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.470152 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.485343 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.496327 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.505841 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.525837 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 
00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.529045 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxfgl\" (UniqueName: \"kubernetes.io/projected/0a5c5a9a-cc45-4715-8e37-35798d843870-kube-api-access-gxfgl\") pod \"node-resolver-2w6nd\" (UID: \"0a5c5a9a-cc45-4715-8e37-35798d843870\") " pod="openshift-dns/node-resolver-2w6nd" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.529107 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f67ec28-baae-409e-a42d-03a486e7a26b-proxy-tls\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.529147 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hqztm\" (UniqueName: \"kubernetes.io/projected/5f67ec28-baae-409e-a42d-03a486e7a26b-kube-api-access-hqztm\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.529184 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0a5c5a9a-cc45-4715-8e37-35798d843870-hosts-file\") pod \"node-resolver-2w6nd\" (UID: \"0a5c5a9a-cc45-4715-8e37-35798d843870\") " pod="openshift-dns/node-resolver-2w6nd" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.529208 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f67ec28-baae-409e-a42d-03a486e7a26b-mcd-auth-proxy-config\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.529243 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5f67ec28-baae-409e-a42d-03a486e7a26b-rootfs\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.529319 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5f67ec28-baae-409e-a42d-03a486e7a26b-rootfs\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.529414 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0a5c5a9a-cc45-4715-8e37-35798d843870-hosts-file\") pod \"node-resolver-2w6nd\" (UID: \"0a5c5a9a-cc45-4715-8e37-35798d843870\") " pod="openshift-dns/node-resolver-2w6nd" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.534840 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f67ec28-baae-409e-a42d-03a486e7a26b-proxy-tls\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.547110 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqztm\" (UniqueName: \"kubernetes.io/projected/5f67ec28-baae-409e-a42d-03a486e7a26b-kube-api-access-hqztm\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.552115 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.569664 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.583732 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.655264 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-kb5q7"] Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.655962 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: W0125 00:09:42.658104 4947 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: configmaps "cni-copy-resources" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.658218 4947 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-copy-resources\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.658265 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fvfwz"] Jan 25 00:09:42 crc kubenswrapper[4947]: W0125 00:09:42.658493 4947 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.658533 4947 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" 
logger="UnhandledError" Jan 25 00:09:42 crc kubenswrapper[4947]: W0125 00:09:42.658644 4947 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.658682 4947 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 25 00:09:42 crc kubenswrapper[4947]: W0125 00:09:42.658774 4947 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.658807 4947 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.659217 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.659777 4947 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.662099 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.662355 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.662437 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9fspn"] Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.662673 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.662695 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.662937 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.663356 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.664172 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.664263 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.666004 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.667191 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.679217 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.692003 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.707450 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.727015 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.743629 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.754939 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.767731 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.788468 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.813625 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.828743 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.833432 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-etc-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.833517 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cni-binary-copy\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.833556 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2d914454-2c17-47f2-aa53-aba3bfaad296-cni-binary-copy\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.833653 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-k8s-cni-cncf-io\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.833735 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-kubelet\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.833813 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.833889 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-ovn-kubernetes\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.834708 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-daemon-config\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.834772 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9x7t\" (UniqueName: \"kubernetes.io/projected/2d914454-2c17-47f2-aa53-aba3bfaad296-kube-api-access-h9x7t\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.834808 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.834841 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-ovn\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc 
kubenswrapper[4947]: I0125 00:09:42.834946 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-cni-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835017 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-systemd\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835118 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-netns\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835206 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-cni-multus\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835257 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-log-socket\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 
00:09:42.835309 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bf5f940-5287-40f1-b208-535cdfcb0054-ovn-node-metrics-cert\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835385 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-kubelet\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835431 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-multus-certs\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835467 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-etc-kubernetes\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835502 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-bin\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 
00:09:42.835543 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sssz9\" (UniqueName: \"kubernetes.io/projected/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-kube-api-access-sssz9\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835585 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-system-cni-dir\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835619 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-cnibin\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835696 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-var-lib-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835730 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-hostroot\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc 
kubenswrapper[4947]: I0125 00:09:42.835762 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-systemd-units\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835824 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-netns\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835863 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-conf-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835893 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-env-overrides\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835940 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-os-release\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 
00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835989 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836022 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-os-release\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836058 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh6bp\" (UniqueName: \"kubernetes.io/projected/8bf5f940-5287-40f1-b208-535cdfcb0054-kube-api-access-xh6bp\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836089 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836117 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-cni-bin\") pod \"multus-9fspn\" (UID: 
\"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836202 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-slash\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836234 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-system-cni-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836264 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-netd\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836296 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-config\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836372 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-script-lib\") pod \"ovnkube-node-fvfwz\" (UID: 
\"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836489 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-socket-dir-parent\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836535 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-node-log\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836579 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cnibin\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.846819 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.862352 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.879343 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.890954 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.912764 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.928757 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937561 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937625 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-cni-bin\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937663 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-slash\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937701 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-system-cni-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937734 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-netd\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937772 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-config\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937782 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-cni-bin\") pod \"multus-9fspn\" (UID: 
\"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937816 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-script-lib\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937802 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-slash\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937841 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-netd\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937870 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-system-cni-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938193 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-socket-dir-parent\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: 
I0125 00:09:42.938314 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-node-log\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938203 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938449 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cnibin\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938505 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cnibin\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938214 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-socket-dir-parent\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938460 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-node-log\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938526 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-etc-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938712 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cni-binary-copy\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938848 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2d914454-2c17-47f2-aa53-aba3bfaad296-cni-binary-copy\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938927 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-k8s-cni-cncf-io\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938991 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-kubelet\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938637 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-etc-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939037 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-k8s-cni-cncf-io\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939075 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-kubelet\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938729 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-script-lib\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939169 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: 
\"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939200 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-config\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939276 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939350 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-ovn-kubernetes\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939449 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-daemon-config\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939523 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9x7t\" (UniqueName: \"kubernetes.io/projected/2d914454-2c17-47f2-aa53-aba3bfaad296-kube-api-access-h9x7t\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " 
pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939595 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939654 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939672 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-ovn\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939746 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-cni-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939455 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-ovn-kubernetes\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939816 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-systemd\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939876 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-netns\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939899 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-systemd\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939920 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-cni-multus\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939945 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-netns\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939951 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-cni-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939958 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-log-socket\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939977 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-cni-multus\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940000 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-log-socket\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940009 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bf5f940-5287-40f1-b208-535cdfcb0054-ovn-node-metrics-cert\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940036 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-kubelet\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940056 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-multus-certs\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940071 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-etc-kubernetes\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940085 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-bin\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940105 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sssz9\" (UniqueName: \"kubernetes.io/projected/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-kube-api-access-sssz9\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940138 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-system-cni-dir\") 
pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940154 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-cnibin\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940154 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-multus-certs\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940177 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-var-lib-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940183 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-etc-kubernetes\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940210 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-hostroot\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc 
kubenswrapper[4947]: I0125 00:09:42.940195 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-kubelet\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940236 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-system-cni-dir\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940242 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-var-lib-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940192 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-hostroot\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940250 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-cnibin\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940268 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-bin\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940272 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-systemd-units\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940298 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-netns\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940318 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-conf-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940338 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-env-overrides\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940341 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-daemon-config\") pod 
\"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940358 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-netns\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940397 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-os-release\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940418 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-conf-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940421 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940500 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-os-release\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " 
pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940539 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh6bp\" (UniqueName: \"kubernetes.io/projected/8bf5f940-5287-40f1-b208-535cdfcb0054-kube-api-access-xh6bp\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940647 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-os-release\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940659 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-os-release\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940696 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-env-overrides\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940328 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-systemd-units\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940998 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-ovn\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.945853 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bf5f940-5287-40f1-b208-535cdfcb0054-ovn-node-metrics-cert\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.962612 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.974522 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh6bp\" (UniqueName: \"kubernetes.io/projected/8bf5f940-5287-40f1-b208-535cdfcb0054-kube-api-access-xh6bp\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.982181 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.995886 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.027854 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.035158 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 22:00:45.869167835 +0000 UTC Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.045095 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.061747 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.079446 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.097633 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.112548 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.225494 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.230654 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f67ec28-baae-409e-a42d-03a486e7a26b-mcd-auth-proxy-config\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.240824 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.250220 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxfgl\" (UniqueName: \"kubernetes.io/projected/0a5c5a9a-cc45-4715-8e37-35798d843870-kube-api-access-gxfgl\") pod \"node-resolver-2w6nd\" (UID: \"0a5c5a9a-cc45-4715-8e37-35798d843870\") " pod="openshift-dns/node-resolver-2w6nd" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.273402 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1" exitCode=0 Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.273457 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.273504 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"ab91914d18e527be722f5e70489e90096dc0e627d44b69e63be506f96778e303"} Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.281913 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.288204 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.302312 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.314073 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.345232 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.362737 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.376991 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.391217 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.410232 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.431886 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.455212 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.469364 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-2w6nd" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.474592 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.474668 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: W0125 00:09:43.484727 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a5c5a9a_cc45_4715_8e37_35798d843870.slice/crio-03527a92fe9240a557dbc6fbf1e9cbdf74839b42e50ef361d8b9ea5a1de1de14 WatchSource:0}: Error finding container 03527a92fe9240a557dbc6fbf1e9cbdf74839b42e50ef361d8b9ea5a1de1de14: Status 404 returned error can't find the container with id 03527a92fe9240a557dbc6fbf1e9cbdf74839b42e50ef361d8b9ea5a1de1de14 Jan 25 00:09:43 crc kubenswrapper[4947]: W0125 00:09:43.491812 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f67ec28_baae_409e_a42d_03a486e7a26b.slice/crio-69cff864ded933b145fe6c03e7145916170a9b1b8617c4fb88bf02bb79d2e72c WatchSource:0}: Error finding container 69cff864ded933b145fe6c03e7145916170a9b1b8617c4fb88bf02bb79d2e72c: Status 404 returned error can't find the container with id 69cff864ded933b145fe6c03e7145916170a9b1b8617c4fb88bf02bb79d2e72c Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.492245 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.508227 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.560827 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.569762 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2d914454-2c17-47f2-aa53-aba3bfaad296-cni-binary-copy\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.569980 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cni-binary-copy\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.624810 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.641081 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.647390 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9x7t\" (UniqueName: 
\"kubernetes.io/projected/2d914454-2c17-47f2-aa53-aba3bfaad296-kube-api-access-h9x7t\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.647829 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sssz9\" (UniqueName: \"kubernetes.io/projected/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-kube-api-access-sssz9\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.746654 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.746757 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.746784 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.746807 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.746847 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.746992 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747011 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747027 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747075 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-25 00:09:47.747057757 +0000 UTC m=+26.980048197 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747436 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:09:47.747426296 +0000 UTC m=+26.980416736 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747474 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747498 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:47.747491967 +0000 UTC m=+26.980482407 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747541 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747561 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:47.747555059 +0000 UTC m=+26.980545499 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747601 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747610 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747617 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747639 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:47.747629901 +0000 UTC m=+26.980620341 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.882953 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.884876 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.884942 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.884999 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.885207 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.887538 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-9fspn" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.893959 4947 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.894232 4947 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.895083 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.895113 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.895122 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.895156 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.895167 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:43Z","lastTransitionTime":"2026-01-25T00:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:43 crc kubenswrapper[4947]: W0125 00:09:43.905701 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d914454_2c17_47f2_aa53_aba3bfaad296.slice/crio-21be1f792542582a68e25cc335cf9f353faba700a027f5b07bf846179cf28c34 WatchSource:0}: Error finding container 21be1f792542582a68e25cc335cf9f353faba700a027f5b07bf846179cf28c34: Status 404 returned error can't find the container with id 21be1f792542582a68e25cc335cf9f353faba700a027f5b07bf846179cf28c34 Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.940030 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.941992 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.990143 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.001662 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.001705 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.001715 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.001735 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.002084 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: E0125 00:09:44.030817 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.034243 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.034278 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.034291 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.034308 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.034321 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.035516 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 06:31:24.387800273 +0000 UTC Jan 25 00:09:44 crc kubenswrapper[4947]: E0125 00:09:44.046980 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",
\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.052489 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.052555 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.052576 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.052604 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.052617 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: E0125 00:09:44.065214 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.069244 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.069293 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.069305 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.069325 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.069339 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: E0125 00:09:44.080890 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: E0125 00:09:44.081007 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.083168 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.083215 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.083227 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.083259 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.083271 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.090064 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.090096 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.090080 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:44 crc kubenswrapper[4947]: E0125 00:09:44.090214 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:44 crc kubenswrapper[4947]: E0125 00:09:44.090318 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:44 crc kubenswrapper[4947]: E0125 00:09:44.090394 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.171250 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.185858 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.185900 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.185911 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.185927 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.185937 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: W0125 00:09:44.190374 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb89f0c74_3c8d_4e3f_8065_9e25a6749dcb.slice/crio-fa247dc61cc46d54df17176fa390797ceda197ce4896459f830c403b7bd873a1 WatchSource:0}: Error finding container fa247dc61cc46d54df17176fa390797ceda197ce4896459f830c403b7bd873a1: Status 404 returned error can't find the container with id fa247dc61cc46d54df17176fa390797ceda197ce4896459f830c403b7bd873a1 Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.282845 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerStarted","Data":"fa247dc61cc46d54df17176fa390797ceda197ce4896459f830c403b7bd873a1"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.285770 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9fspn" event={"ID":"2d914454-2c17-47f2-aa53-aba3bfaad296","Type":"ContainerStarted","Data":"e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.285806 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9fspn" event={"ID":"2d914454-2c17-47f2-aa53-aba3bfaad296","Type":"ContainerStarted","Data":"21be1f792542582a68e25cc335cf9f353faba700a027f5b07bf846179cf28c34"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.289800 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.289847 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.289859 4947 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.289879 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.289896 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.290668 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.295814 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.295866 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.295885 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" 
event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"69cff864ded933b145fe6c03e7145916170a9b1b8617c4fb88bf02bb79d2e72c"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.299016 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2w6nd" event={"ID":"0a5c5a9a-cc45-4715-8e37-35798d843870","Type":"ContainerStarted","Data":"8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.299052 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2w6nd" event={"ID":"0a5c5a9a-cc45-4715-8e37-35798d843870","Type":"ContainerStarted","Data":"03527a92fe9240a557dbc6fbf1e9cbdf74839b42e50ef361d8b9ea5a1de1de14"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.303308 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-
syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.305156 4947 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.305215 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.305231 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.305245 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.305260 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.305273 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.318956 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.335362 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.353336 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.374454 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.391506 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b8
19eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.393713 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.393757 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.393770 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.393792 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.393806 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.410902 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.431071 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.447172 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.465346 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.483618 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.494822 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.497244 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.497289 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.497307 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.497334 4947 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.497352 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.517362 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.544639 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.563025 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.576290 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.592156 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.605721 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.605799 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.605846 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.605704 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.605872 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.605890 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.627382 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.642588 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.657098 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.670355 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.681085 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.693670 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.705998 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b8
19eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.708707 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.708740 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.708749 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.708767 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.708777 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.719824 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016
dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.811980 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.812341 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.812488 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.812630 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.812787 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.915646 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.915693 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.915702 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.915718 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.915729 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.020266 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.020328 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.020341 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.020364 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.020381 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.036481 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 17:13:04.410255196 +0000 UTC Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.122701 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.122748 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.122756 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.122774 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.122787 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.225674 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.225745 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.225768 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.225795 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.225815 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.309821 4947 generic.go:334] "Generic (PLEG): container finished" podID="b89f0c74-3c8d-4e3f-8065-9e25a6749dcb" containerID="0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514" exitCode=0 Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.310043 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerDied","Data":"0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.325411 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-hf8gg"] Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.326203 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.328429 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.328520 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.328582 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.328640 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.328883 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.329353 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.330935 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.331159 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.332558 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.347714 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.364772 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.385627 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.410715 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.424412 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.433698 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.433746 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.433764 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.433786 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.433802 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.441883 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.465475 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-serviceca\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.465561 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-host\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.465609 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzgv2\" (UniqueName: \"kubernetes.io/projected/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-kube-api-access-xzgv2\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.467586 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 
00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.487756 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.503322 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.517269 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.538015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.538273 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.538391 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.538536 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.538656 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.543692 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.558773 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.567104 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-serviceca\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.567164 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-host\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.567195 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzgv2\" (UniqueName: \"kubernetes.io/projected/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-kube-api-access-xzgv2\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.567348 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-host\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.568681 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-serviceca\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.573858 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.588654 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.590509 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzgv2\" (UniqueName: \"kubernetes.io/projected/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-kube-api-access-xzgv2\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.600166 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.612663 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.626629 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.642190 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.642231 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.642241 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.642257 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.642268 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.643056 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.657175 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.660775 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.678000 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.697832 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.727911 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.750388 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.752317 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.752362 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.752381 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.752406 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.752433 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.762499 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.777566 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.795630 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.808846 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.854954 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.855009 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.855025 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 
00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.855045 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.855058 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.958142 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.958185 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.958196 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.958216 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.958228 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.037555 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:10:24.087491363 +0000 UTC Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.060722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.060771 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.060784 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.060808 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.060820 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.089255 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:46 crc kubenswrapper[4947]: E0125 00:09:46.089386 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.089759 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:46 crc kubenswrapper[4947]: E0125 00:09:46.089814 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.089852 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:46 crc kubenswrapper[4947]: E0125 00:09:46.089899 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.162864 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.162902 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.162915 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.162934 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.162948 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.266047 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.266680 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.266749 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.266821 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.266898 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.316424 4947 generic.go:334] "Generic (PLEG): container finished" podID="b89f0c74-3c8d-4e3f-8065-9e25a6749dcb" containerID="12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32" exitCode=0 Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.316501 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerDied","Data":"12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.318319 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hf8gg" event={"ID":"4f901695-ec8a-4fe2-ba5e-43e346b32ac3","Type":"ContainerStarted","Data":"074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.318457 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hf8gg" event={"ID":"4f901695-ec8a-4fe2-ba5e-43e346b32ac3","Type":"ContainerStarted","Data":"eba71edf90e49316756e619d350602e2d939a9dc56b33fa0901779b9a6729afb"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.322998 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.336624 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.351457 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.370149 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.371374 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.371408 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.371418 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.371437 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.371449 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.399370 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.415882 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.432776 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.453254 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.469703 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 
00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.474437 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.474490 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.474500 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.474515 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.474560 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.482011 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.497436 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.511239 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.523985 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.537510 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.558855 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.576322 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 
00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.577781 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.577814 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.577826 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.577846 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.577860 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.587646 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.607288 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.627605 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.643094 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.656596 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.668455 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.680662 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.680727 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.680745 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.680775 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.680792 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.683877 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.699885 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.712372 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.724253 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.743680 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 
00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.761407 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.772798 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.783997 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.784073 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.784087 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc 
kubenswrapper[4947]: I0125 00:09:46.784106 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.784518 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.887448 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.887511 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.887526 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.887550 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.887566 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.990578 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.990647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.990666 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.990695 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.990716 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.038304 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 01:31:49.556574073 +0000 UTC Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.094066 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.094154 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.094173 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.094197 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.094216 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.196937 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.197004 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.197028 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.197055 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.197075 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.299672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.299708 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.299720 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.299737 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.299747 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.338456 4947 generic.go:334] "Generic (PLEG): container finished" podID="b89f0c74-3c8d-4e3f-8065-9e25a6749dcb" containerID="4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377" exitCode=0 Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.338949 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerDied","Data":"4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.365054 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.389496 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.403493 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.403553 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.403564 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.403584 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.403597 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.405588 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.428674 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.445301 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.462945 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.475698 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.489418 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.506120 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.506188 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.506198 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.506215 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.506226 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.507466 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 
00:09:47.522432 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 
00:09:47.535638 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.558112 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.572814 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.585431 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.608951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.609016 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.609028 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc 
kubenswrapper[4947]: I0125 00:09:47.609047 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.609060 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.711797 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.711860 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.711877 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.711908 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.711927 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.794336 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.794497 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.794552 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.794587 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794664 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:09:55.79460605 +0000 UTC m=+35.027596530 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794766 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794820 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794861 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:55.794833805 +0000 UTC m=+35.027824285 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794860 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794925 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.794765 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794951 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:55.794917197 +0000 UTC m=+35.027907667 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794968 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794996 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.795184 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:55.795158953 +0000 UTC m=+35.028149433 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.795215 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.795242 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.795348 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:55.795314906 +0000 UTC m=+35.028305386 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.816913 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.816979 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.816996 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.817020 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.817037 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.920813 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.920867 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.920880 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.920900 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.920914 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.023589 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.023644 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.023660 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.023681 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.023696 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.039030 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:32:06.371466463 +0000 UTC Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.089582 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:48 crc kubenswrapper[4947]: E0125 00:09:48.089795 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.090410 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.090487 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:48 crc kubenswrapper[4947]: E0125 00:09:48.090714 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:48 crc kubenswrapper[4947]: E0125 00:09:48.090914 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.127357 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.127422 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.127443 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.127479 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.127498 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.231004 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.231101 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.231164 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.231204 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.231236 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.334880 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.334932 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.334945 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.334965 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.335007 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.345390 4947 generic.go:334] "Generic (PLEG): container finished" podID="b89f0c74-3c8d-4e3f-8065-9e25a6749dcb" containerID="be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581" exitCode=0 Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.345459 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerDied","Data":"be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.368052 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.395442 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.411194 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.433846 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.438545 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.438581 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.438589 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.438605 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.438615 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.447736 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016
dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.463952 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.477438 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.489962 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.506299 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.522177 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.536182 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.540847 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.540879 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.540888 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.540904 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.540913 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.551459 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.567326 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.582704 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.643493 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.643529 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.643539 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc 
kubenswrapper[4947]: I0125 00:09:48.643555 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.643564 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.746822 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.746859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.746868 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.746887 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.746898 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.851305 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.851368 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.851386 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.851409 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.851429 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.954711 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.955207 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.955228 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.955257 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.955275 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.039244 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:06:43.623050825 +0000 UTC Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.059108 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.059172 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.059185 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.059203 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.059214 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.162311 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.162384 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.162398 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.162422 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.162442 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.266176 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.266275 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.266302 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.266343 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.266368 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.356022 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerStarted","Data":"0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.362241 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.362842 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.362882 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.369349 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.369423 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.369452 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.369480 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.369498 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.383994 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.396576 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.406967 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.428113 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.447887 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.464002 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.472195 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.472249 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.472268 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.472294 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.472314 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.478693 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.503987 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.521803 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.543554 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.563297 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.585523 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.599179 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.599238 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.599258 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.599287 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.599313 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.611335 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sss
z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.627645 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.644022 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.659276 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.683247 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.703361 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.703414 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.703437 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.703464 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.703482 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.708121 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.724914 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.768043 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.789780 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sss
z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.806652 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.806722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.806739 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.806767 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.806792 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.809686 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z 
is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.827936 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.843948 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.859673 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.881365 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.900271 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.910336 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.910390 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.910408 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.910434 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.910451 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.917224 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.943677 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.013176 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.013212 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.013225 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.013246 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.013258 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.040242 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 01:28:43.541106747 +0000 UTC Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.089001 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.089074 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:50 crc kubenswrapper[4947]: E0125 00:09:50.089265 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:50 crc kubenswrapper[4947]: E0125 00:09:50.089385 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.089539 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:50 crc kubenswrapper[4947]: E0125 00:09:50.089859 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.116191 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.116460 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.116651 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.116995 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.117182 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.220506 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.220802 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.220873 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.220937 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.221002 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.324031 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.324295 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.324383 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.324493 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.324621 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.369401 4947 generic.go:334] "Generic (PLEG): container finished" podID="b89f0c74-3c8d-4e3f-8065-9e25a6749dcb" containerID="0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee" exitCode=0 Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.369831 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerDied","Data":"0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.370243 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.386960 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.406662 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.427866 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.427907 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.427917 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.427934 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.427946 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.428987 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.445627 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.450479 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.469007 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.482301 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.501358 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.524584 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.531019 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.531075 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.531096 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.531152 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.531177 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.543469 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.561789 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.588304 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.612327 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a20
3feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.629483 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.633925 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.633987 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.634001 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.634021 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.634034 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.643417 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.657582 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.671701 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.686083 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.698722 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.712679 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.732220 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.737056 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.737112 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.737144 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.737165 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.737180 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.746851 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z 
is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.760001 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.772776 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.784849 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.802427 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.821567 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.833774 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.839790 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.839944 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.840038 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.840159 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.840319 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.856699 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.924959 4947 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.945861 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.945910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.945922 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.945951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.945964 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.041007 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 08:37:23.289736808 +0000 UTC Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.050345 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.050422 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.050443 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.050473 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.050492 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.105834 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.127310 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.152036 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a20
3feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.154719 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.154778 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.154800 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.154827 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.154847 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.176678 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.193793 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.208328 4947 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.219666 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.234428 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.257617 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.257663 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc 
kubenswrapper[4947]: I0125 00:09:51.257859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.257882 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.257896 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.260286 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.271907 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.301532 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.322514 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.343252 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.357325 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.362170 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.362246 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.362266 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.362296 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.362317 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.379070 4947 generic.go:334] "Generic (PLEG): container finished" podID="b89f0c74-3c8d-4e3f-8065-9e25a6749dcb" containerID="2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273" exitCode=0 Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.379146 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerDied","Data":"2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.403025 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.427855 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.458418 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.472709 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.482166 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.482186 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc 
kubenswrapper[4947]: I0125 00:09:51.482215 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.482230 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.499833 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.519994 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.535928 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.563290 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.584302 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.584465 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.584522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.584540 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.584569 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.584591 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.601490 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.618918 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.645633 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.669341 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.687245 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.687289 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.687301 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.687322 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.687335 4947 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.688477 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.703365 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.791470 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.791546 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.791564 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.791591 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.791612 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.894521 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.894563 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.894578 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.894599 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.894617 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.997911 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.997990 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.998008 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.998041 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.998064 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.041255 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:34:29.797059473 +0000 UTC Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.089553 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.089621 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.089708 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:52 crc kubenswrapper[4947]: E0125 00:09:52.089791 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:52 crc kubenswrapper[4947]: E0125 00:09:52.089926 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:52 crc kubenswrapper[4947]: E0125 00:09:52.090063 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.103289 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.103352 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.103438 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.103463 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.103495 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.192314 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.206499 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.206552 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.206564 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.206584 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.206598 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.211805 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.232420 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c
97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.247237 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.263925 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.285481 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.300823 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.309198 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.309248 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.309261 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.309282 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.309299 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.324063 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.340252 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.358467 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.375829 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.389199 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/0.log" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.392642 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947" exitCode=1 Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.392724 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.396688 4947 scope.go:117] "RemoveContainer" containerID="6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.400191 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.402322 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerStarted","Data":"aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.411668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.411742 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.411771 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.411806 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.411831 4947 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.418367 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name
\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.433728 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.451924 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.463936 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.477208 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.489971 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.505008 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.516484 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.516521 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.516532 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.516550 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.516560 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.519012 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.536699 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.548580 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.560914 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c
97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.572381 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.583626 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.602063 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.619570 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.619600 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.619613 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.619632 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.619645 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.621639 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.634211 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.660627 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0125 00:09:51.640841 6212 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0125 00:09:51.640872 6212 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0125 00:09:51.640905 6212 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0125 00:09:51.640937 6212 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0125 00:09:51.640906 6212 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0125 00:09:51.640988 6212 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0125 00:09:51.641027 6212 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0125 00:09:51.641034 6212 handler.go:208] Removed *v1.Node event handler 2\\\\nI0125 00:09:51.641085 6212 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0125 00:09:51.641099 6212 handler.go:208] Removed *v1.Node event handler 7\\\\nI0125 00:09:51.641163 6212 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0125 00:09:51.641242 6212 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0125 00:09:51.641275 6212 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0125 00:09:51.641347 6212 factory.go:656] Stopping watch factory\\\\nI0125 00:09:51.641363 6212 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.722970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.723024 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.723043 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.723070 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.723087 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.825910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.825983 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.826002 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.826030 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.826049 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.929366 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.929715 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.929729 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.929748 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.929761 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.033013 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.033060 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.033072 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.033091 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.033103 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.042550 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:08:17.289131796 +0000 UTC Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.135923 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.135988 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.136005 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.136029 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.136045 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.239819 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.239871 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.239896 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.239916 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.239928 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.342198 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.342253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.342270 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.342289 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.342302 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.413594 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/0.log" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.418932 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.435816 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95
b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe
81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.444884 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.444937 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.444951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.444972 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.444986 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.450645 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:
09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.460073 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.476340 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.491960 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.506666 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.521605 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.538974 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.548485 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.548547 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.548566 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.548595 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.548616 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.556498 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.572122 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.592541 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.614496 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.632333 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.652760 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.652853 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.652885 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.652936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.652965 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.663859 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0125 00:09:51.640841 6212 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0125 00:09:51.640872 6212 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0125 00:09:51.640905 6212 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0125 
00:09:51.640937 6212 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0125 00:09:51.640906 6212 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0125 00:09:51.640988 6212 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0125 00:09:51.641027 6212 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0125 00:09:51.641034 6212 handler.go:208] Removed *v1.Node event handler 2\\\\nI0125 00:09:51.641085 6212 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0125 00:09:51.641099 6212 handler.go:208] Removed *v1.Node event handler 7\\\\nI0125 00:09:51.641163 6212 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0125 00:09:51.641242 6212 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0125 00:09:51.641275 6212 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0125 00:09:51.641347 6212 factory.go:656] Stopping watch factory\\\\nI0125 00:09:51.641363 6212 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.756027 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.756071 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.756082 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.756098 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.756107 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.858803 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.858863 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.858881 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.858906 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.858926 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.961491 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.961593 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.961616 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.961644 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.961665 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.043597 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 20:39:10.274130648 +0000 UTC Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.063984 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.064194 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.064262 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.064321 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.064376 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.089073 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.089238 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.089077 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.089380 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.089515 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.089626 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.145996 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.146067 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.146085 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.146113 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.146161 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.167693 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.174601 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.174674 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.174700 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.174725 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.174742 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.194585 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.199832 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.199891 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.199913 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.199939 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.199958 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.219325 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.250084 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.250180 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.250205 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.250236 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.250260 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.272649 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.279294 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.279352 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.279370 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.279393 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.279407 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.304952 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.305069 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.306880 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.306918 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.306927 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.306945 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.306958 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.409966 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.410050 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.410074 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.410107 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.410203 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.424100 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/1.log" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.425429 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/0.log" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.429311 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e" exitCode=1 Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.429362 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.429437 4947 scope.go:117] "RemoveContainer" containerID="6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.430890 4947 scope.go:117] "RemoveContainer" containerID="911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e" Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.431259 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.450859 4947 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.470607 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.487477 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.508529 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0125 00:09:51.640841 6212 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0125 00:09:51.640872 6212 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0125 00:09:51.640905 6212 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0125 
00:09:51.640937 6212 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0125 00:09:51.640906 6212 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0125 00:09:51.640988 6212 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0125 00:09:51.641027 6212 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0125 00:09:51.641034 6212 handler.go:208] Removed *v1.Node event handler 2\\\\nI0125 00:09:51.641085 6212 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0125 00:09:51.641099 6212 handler.go:208] Removed *v1.Node event handler 7\\\\nI0125 00:09:51.641163 6212 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0125 00:09:51.641242 6212 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0125 00:09:51.641275 6212 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0125 00:09:51.641347 6212 factory.go:656] Stopping watch factory\\\\nI0125 00:09:51.641363 6212 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":
\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.514507 4947 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.514557 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.514578 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.514604 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.514622 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.524049 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.539271 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.557347 4947 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.580003 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.600466 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.618834 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.618913 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.618930 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.618963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.618982 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.627206 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.653017 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.672065 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-
dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\
\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.689951 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.706939 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.722962 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.723031 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.723057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc 
kubenswrapper[4947]: I0125 00:09:54.723096 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.723172 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.826587 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.826655 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.826688 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.826719 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.826740 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.929646 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.929693 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.929705 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.929755 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.929767 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.032917 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.033016 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.033048 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.033083 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.033107 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.045424 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:52:42.773281739 +0000 UTC Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.137042 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.137189 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.137219 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.137255 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.137278 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.192671 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm"] Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.193312 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.198361 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.200865 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.213329 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.226352 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.240295 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.241852 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.241907 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.241932 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.241964 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.241992 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.264252 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0125 00:09:51.640841 6212 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0125 00:09:51.640872 6212 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0125 00:09:51.640905 6212 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0125 
00:09:51.640937 6212 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0125 00:09:51.640906 6212 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0125 00:09:51.640988 6212 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0125 00:09:51.641027 6212 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0125 00:09:51.641034 6212 handler.go:208] Removed *v1.Node event handler 2\\\\nI0125 00:09:51.641085 6212 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0125 00:09:51.641099 6212 handler.go:208] Removed *v1.Node event handler 7\\\\nI0125 00:09:51.641163 6212 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0125 00:09:51.641242 6212 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0125 00:09:51.641275 6212 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0125 00:09:51.641347 6212 factory.go:656] Stopping watch factory\\\\nI0125 00:09:51.641363 6212 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":
\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.280760 4947 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-co
ntroller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.291439 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba95f90e-9162-425c-9ac3-d655ea43cfa0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.291486 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggxrj\" (UniqueName: \"kubernetes.io/projected/ba95f90e-9162-425c-9ac3-d655ea43cfa0-kube-api-access-ggxrj\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: 
I0125 00:09:55.291532 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba95f90e-9162-425c-9ac3-d655ea43cfa0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.291554 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba95f90e-9162-425c-9ac3-d655ea43cfa0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.294803 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.307877 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.319259 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.337639 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.344464 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.344494 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.344504 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.344521 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.344532 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.356627 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z 
is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.369262 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.388272 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 
00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.392185 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba95f90e-9162-425c-9ac3-d655ea43cfa0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.392266 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba95f90e-9162-425c-9ac3-d655ea43cfa0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.392307 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba95f90e-9162-425c-9ac3-d655ea43cfa0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" 
Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.392362 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggxrj\" (UniqueName: \"kubernetes.io/projected/ba95f90e-9162-425c-9ac3-d655ea43cfa0-kube-api-access-ggxrj\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.392993 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba95f90e-9162-425c-9ac3-d655ea43cfa0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.393140 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba95f90e-9162-425c-9ac3-d655ea43cfa0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.400998 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba95f90e-9162-425c-9ac3-d655ea43cfa0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.403604 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.418675 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.422028 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggxrj\" (UniqueName: \"kubernetes.io/projected/ba95f90e-9162-425c-9ac3-d655ea43cfa0-kube-api-access-ggxrj\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.434115 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/1.log" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.434866 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.438252 4947 scope.go:117] "RemoveContainer" containerID="911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e" Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.438560 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" 
pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.446800 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.446844 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.446860 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.446882 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.446903 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.449161 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.461805 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.489324 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.505167 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.515360 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.523221 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131c
e2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.541960 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.549770 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.549836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.549851 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.549877 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.549895 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.561777 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.585803 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.600529 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.610386 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.622376 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.640530 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.652804 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.652838 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.652849 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.652864 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.652877 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.654776 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.668798 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.684407 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.756243 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.756290 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.756308 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.756329 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.756344 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.796605 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.796700 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.796736 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.796761 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.796788 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.796869 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.796957 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:11.796931291 +0000 UTC m=+51.029921731 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797000 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797031 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797039 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797050 4947 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797067 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797086 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797071 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797045 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:11.797033574 +0000 UTC m=+51.030024014 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797228 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:11.797211118 +0000 UTC m=+51.030201568 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797244 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:10:11.797236079 +0000 UTC m=+51.030226529 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797258 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:11.797251829 +0000 UTC m=+51.030242289 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.861116 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.861216 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.861235 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.861265 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.861284 4947 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.965517 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.965585 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.965605 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.965634 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.965655 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.046175 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 16:25:33.176400424 +0000 UTC Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.069047 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.069149 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.069168 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.069193 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.069211 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.089262 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.089304 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.089262 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:56 crc kubenswrapper[4947]: E0125 00:09:56.089638 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:56 crc kubenswrapper[4947]: E0125 00:09:56.089744 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:56 crc kubenswrapper[4947]: E0125 00:09:56.090016 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.172177 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.172234 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.172250 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.172273 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.172291 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.274840 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.274903 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.274926 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.274958 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.274983 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.378657 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.378737 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.378759 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.378791 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.378812 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.443846 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" event={"ID":"ba95f90e-9162-425c-9ac3-d655ea43cfa0","Type":"ContainerStarted","Data":"889a90c35956bc31d77a3e867b33cc7ecd8b4e51803d12e8af7e0db44b30b659"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.482642 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.482714 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.482741 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.482772 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.482791 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.587295 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.587378 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.587395 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.587420 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.587440 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.690854 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.690922 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.690943 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.690970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.690988 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.731675 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hj7kb"] Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.732759 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:56 crc kubenswrapper[4947]: E0125 00:09:56.733082 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.752576 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/ku
be-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.772991 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.793761 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.794764 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.794824 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.794845 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.794873 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.794894 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.807791 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.807874 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxbvt\" (UniqueName: \"kubernetes.io/projected/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-kube-api-access-qxbvt\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.809766 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.840539 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.859056 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.879192 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.898167 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.898250 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.898276 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 
00:09:56.898314 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.898337 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.900989 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.909001 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.909085 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxbvt\" (UniqueName: \"kubernetes.io/projected/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-kube-api-access-qxbvt\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:56 crc kubenswrapper[4947]: E0125 00:09:56.909772 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:09:56 crc 
kubenswrapper[4947]: E0125 00:09:56.909869 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs podName:9a64fbf1-68fc-4379-9bb7-009c4f2cc812 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:57.409843562 +0000 UTC m=+36.642834032 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs") pod "network-metrics-daemon-hj7kb" (UID: "9a64fbf1-68fc-4379-9bb7-009c4f2cc812") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.929784 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.942460 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxbvt\" (UniqueName: \"kubernetes.io/projected/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-kube-api-access-qxbvt\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:56 crc 
kubenswrapper[4947]: I0125 00:09:56.948767 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.976635 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\"
:\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.001398 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb
2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.002348 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.002408 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.002426 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.002451 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.002469 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.017973 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:57Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.038558 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 
00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:57Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.047081 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 12:01:29.90063808 +0000 UTC Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.057559 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:57Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.076229 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:57Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.106105 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.106194 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.106260 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.106285 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.106303 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.209465 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.209550 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.209571 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.209599 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.209621 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.312517 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.312578 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.312596 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.312624 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.312645 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.414818 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:57 crc kubenswrapper[4947]: E0125 00:09:57.415009 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:09:57 crc kubenswrapper[4947]: E0125 00:09:57.415095 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs podName:9a64fbf1-68fc-4379-9bb7-009c4f2cc812 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:58.415075455 +0000 UTC m=+37.648065895 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs") pod "network-metrics-daemon-hj7kb" (UID: "9a64fbf1-68fc-4379-9bb7-009c4f2cc812") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.415712 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.415814 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.415836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.415862 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.415879 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.521349 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.521390 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.521404 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.521421 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.521433 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.624492 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.624541 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.624552 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.624570 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.624581 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.727564 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.727658 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.727681 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.727754 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.727818 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.831251 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.831338 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.831359 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.831388 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.831407 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.935886 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.936720 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.936853 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.937181 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.937377 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.041255 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.041607 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.041755 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.041955 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.042212 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.047757 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 09:19:35.007536696 +0000 UTC Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.089315 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.089322 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:58 crc kubenswrapper[4947]: E0125 00:09:58.089884 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.089352 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.089355 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:58 crc kubenswrapper[4947]: E0125 00:09:58.090281 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:58 crc kubenswrapper[4947]: E0125 00:09:58.090527 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:09:58 crc kubenswrapper[4947]: E0125 00:09:58.090795 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.145586 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.145653 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.145672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.145702 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.145723 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.249257 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.249380 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.249430 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.249468 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.249503 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.352656 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.352712 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.352730 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.352754 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.352772 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.428942 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:58 crc kubenswrapper[4947]: E0125 00:09:58.429240 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:09:58 crc kubenswrapper[4947]: E0125 00:09:58.429348 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs podName:9a64fbf1-68fc-4379-9bb7-009c4f2cc812 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:00.429320432 +0000 UTC m=+39.662310922 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs") pod "network-metrics-daemon-hj7kb" (UID: "9a64fbf1-68fc-4379-9bb7-009c4f2cc812") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.454820 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" event={"ID":"ba95f90e-9162-425c-9ac3-d655ea43cfa0","Type":"ContainerStarted","Data":"33f21149a21977361a24c0bc54887c8ca18a3d72cd6714bc26228b1d049c181a"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.454927 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" event={"ID":"ba95f90e-9162-425c-9ac3-d655ea43cfa0","Type":"ContainerStarted","Data":"6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.455988 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.456029 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.456045 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.456065 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.456084 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.472558 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\"
,\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.497058 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.517581 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.544212 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.559266 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.560892 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.560940 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.560953 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc 
kubenswrapper[4947]: I0125 00:09:58.560976 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.560994 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.583859 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.604197 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.624749 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.650006 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.664604 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.664877 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.665022 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.665209 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.665373 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.674383 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z 
is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.689492 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.705668 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.721893 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.744482 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c
97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.762080 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.767837 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.767958 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.767981 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 
00:09:58.768017 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.768040 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.781895 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f
4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.871844 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.871919 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.871943 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.872081 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.872218 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.976018 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.976087 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.976109 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.976181 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.976214 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.048540 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 20:37:32.086168386 +0000 UTC Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.080889 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.080960 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.080982 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.081015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.081034 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.184274 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.184347 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.184366 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.184394 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.184413 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.288309 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.288367 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.288378 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.288400 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.288413 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.391672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.391770 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.391795 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.391831 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.391862 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.495796 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.495899 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.495934 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.495971 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.495996 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.599832 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.599915 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.599937 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.599968 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.599988 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.703239 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.703321 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.703342 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.703372 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.703394 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.807221 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.807275 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.807288 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.807310 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.807324 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.910580 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.910647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.910666 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.910692 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.910710 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.014550 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.014624 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.014642 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.014670 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.014687 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.049412 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 11:43:10.247858341 +0000 UTC Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.088975 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.089019 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:00 crc kubenswrapper[4947]: E0125 00:10:00.089272 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.089118 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:00 crc kubenswrapper[4947]: E0125 00:10:00.089519 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:00 crc kubenswrapper[4947]: E0125 00:10:00.089889 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.089926 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:00 crc kubenswrapper[4947]: E0125 00:10:00.090119 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.117525 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.117628 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.117650 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.117683 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.117704 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.221446 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.221528 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.221545 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.221574 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.221592 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.325416 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.325509 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.325536 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.325576 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.325605 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.429463 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.429543 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.429566 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.429597 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.429617 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.455189 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:00 crc kubenswrapper[4947]: E0125 00:10:00.455444 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:10:00 crc kubenswrapper[4947]: E0125 00:10:00.455579 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs podName:9a64fbf1-68fc-4379-9bb7-009c4f2cc812 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:04.455543082 +0000 UTC m=+43.688533552 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs") pod "network-metrics-daemon-hj7kb" (UID: "9a64fbf1-68fc-4379-9bb7-009c4f2cc812") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.534541 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.534604 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.534623 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.534651 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.534671 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.637963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.638029 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.638048 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.638074 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.638092 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.742624 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.742713 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.742742 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.742776 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.742805 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.847624 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.847694 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.847715 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.847741 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.847759 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.951622 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.951695 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.951716 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.951742 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.951758 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.050203 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 04:52:09.081771522 +0000 UTC Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.055406 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.055483 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.055513 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.055667 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.055710 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.111494 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016
dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.132965 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.151660 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.159261 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.159366 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.159396 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.159435 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.159463 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.181719 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.211018 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.235049 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.252592 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.263812 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.263857 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.263877 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.263904 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.263923 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.270294 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc 
kubenswrapper[4947]: I0125 00:10:01.296435 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a4
28c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.332166 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.353262 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.367344 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.367396 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.367417 4947 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.367442 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.367460 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.372228 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b
057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\
\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.393350 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.413886 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.431661 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.460532 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.470606 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.470685 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.470709 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.470743 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.470771 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.574706 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.574802 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.574827 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.574860 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.574881 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.678800 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.678910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.678929 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.678956 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.678974 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.783202 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.783297 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.783317 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.783345 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.783363 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.887040 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.887162 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.887191 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.887224 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.887253 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.991888 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.991961 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.991981 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.992008 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.992029 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.050915 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:30:25.774616097 +0000 UTC Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.089534 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.089632 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.089641 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.089575 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:02 crc kubenswrapper[4947]: E0125 00:10:02.089830 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:02 crc kubenswrapper[4947]: E0125 00:10:02.089981 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:02 crc kubenswrapper[4947]: E0125 00:10:02.090092 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:02 crc kubenswrapper[4947]: E0125 00:10:02.090304 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.099087 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.099294 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.099379 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.099487 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.099519 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.203838 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.203912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.203931 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.203958 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.203979 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.308209 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.308306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.308331 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.308364 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.308390 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.411797 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.411896 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.411918 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.411944 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.411993 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.514385 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.514461 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.514477 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.514499 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.514515 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.617803 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.617910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.617937 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.617974 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.618004 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.721197 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.721271 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.721294 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.721327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.721351 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.824941 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.825015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.825040 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.825071 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.825093 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.928231 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.928309 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.928328 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.928353 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.928376 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.031303 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.031392 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.031418 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.031452 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.031479 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.051546 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:10:43.30461461 +0000 UTC Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.135533 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.135597 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.135618 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.135642 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.135661 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.244082 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.244202 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.244225 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.244258 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.244277 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.348672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.348781 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.348811 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.348851 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.348880 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.453397 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.453500 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.453523 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.453563 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.453586 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.556547 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.556631 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.556659 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.556691 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.556714 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.660347 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.660435 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.660459 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.660493 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.660515 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.764415 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.764480 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.764498 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.764526 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.764545 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.867530 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.867621 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.867640 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.867670 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.867691 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.971524 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.971590 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.971607 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.971634 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.971653 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.052008 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 09:15:25.965877319 +0000 UTC Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.075745 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.075824 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.075843 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.075875 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.075895 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.089204 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.089304 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.089305 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.089404 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.089418 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.089566 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.089693 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.089816 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.179830 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.179887 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.179901 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.179920 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.179933 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.283647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.283721 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.283741 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.283767 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.283785 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.358747 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.358812 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.358830 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.358855 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.358874 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.379849 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:04Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.384666 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.384727 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.384746 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.384772 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.384789 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.405764 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:04Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.411374 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.411441 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.411454 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.411475 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.411491 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.431816 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:04Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.437216 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.437384 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.437471 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.437578 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.437665 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.461008 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:04Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.466055 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.466193 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.466226 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.466264 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.466290 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.487520 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:04Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.487774 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.489831 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.489903 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.489922 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.489949 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.489971 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.509707 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.509967 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.510078 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs podName:9a64fbf1-68fc-4379-9bb7-009c4f2cc812 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:12.510048022 +0000 UTC m=+51.743038502 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs") pod "network-metrics-daemon-hj7kb" (UID: "9a64fbf1-68fc-4379-9bb7-009c4f2cc812") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.593632 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.593968 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.594189 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.594396 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.594621 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.698419 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.698933 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.699198 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.699477 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.699911 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.804399 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.804480 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.804498 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.804529 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.804554 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.907506 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.907594 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.907620 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.907654 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.907673 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.011639 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.011715 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.011735 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.011765 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.011785 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.053057 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:07:25.210671664 +0000 UTC Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.115212 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.115273 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.115290 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.115314 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.115333 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.219464 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.219541 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.219560 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.219591 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.219610 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.322478 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.322529 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.322546 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.322572 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.322590 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.426316 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.426422 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.426444 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.426472 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.426489 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.529790 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.529876 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.529901 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.529935 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.529958 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.633715 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.633796 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.633823 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.633852 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.633869 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.737951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.738013 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.738028 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.738050 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.738061 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.842350 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.842417 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.842429 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.842453 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.842470 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.945868 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.945935 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.945949 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.945968 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.945981 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.049637 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.049678 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.049697 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.049722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.049738 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.054100 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 16:18:47.33146468 +0000 UTC Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.088749 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.088828 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.088771 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.088771 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:06 crc kubenswrapper[4947]: E0125 00:10:06.088973 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:06 crc kubenswrapper[4947]: E0125 00:10:06.089164 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:06 crc kubenswrapper[4947]: E0125 00:10:06.089278 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:06 crc kubenswrapper[4947]: E0125 00:10:06.089363 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.152919 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.153008 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.153030 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.153064 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.153088 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.256892 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.256963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.256985 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.257014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.257033 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.360088 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.360175 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.360193 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.360218 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.360238 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.463470 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.463577 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.463604 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.463644 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.463670 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.567678 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.567769 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.567796 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.567832 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.567857 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.671737 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.672099 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.672331 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.672485 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.672635 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.777678 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.777773 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.777787 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.777811 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.777823 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.880648 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.880685 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.880696 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.880713 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.880724 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.983562 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.983685 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.983711 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.983740 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.983761 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.054570 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:32:58.21086024 +0000 UTC Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.054929 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.067096 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.078692 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819e
edb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.087211 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.087285 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.087308 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.087342 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.087360 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.090924 4947 scope.go:117] "RemoveContainer" containerID="911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.098838 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.121340 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.140981 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.163102 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.179495 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.190286 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.190614 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.190736 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.190848 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.190945 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.199393 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.218203 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.232572 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.250707 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c
97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.270437 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.283067 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.295397 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.295483 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.295509 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc 
kubenswrapper[4947]: I0125 00:10:07.295541 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.295564 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.300909 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.316838 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.330704 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.359716 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.398639 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.398702 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.398719 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.398744 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.398766 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.501700 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.502178 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.502432 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.502688 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.502903 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.605967 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.606036 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.606060 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.606091 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.606114 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.709768 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.710156 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.710172 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.710193 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.710209 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.813230 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.813289 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.813302 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.813324 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.813341 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.917168 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.917236 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.917254 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.917282 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.917301 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.020344 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.020390 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.020404 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.020428 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.020446 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.054749 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 21:24:55.305132547 +0000 UTC Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.089626 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.089702 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.089809 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:08 crc kubenswrapper[4947]: E0125 00:10:08.089802 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.089865 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:08 crc kubenswrapper[4947]: E0125 00:10:08.090400 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:08 crc kubenswrapper[4947]: E0125 00:10:08.090413 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:08 crc kubenswrapper[4947]: E0125 00:10:08.090484 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.123404 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.123481 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.123502 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.123534 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.123556 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.225888 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.225946 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.225958 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.225985 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.226000 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.329467 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.329534 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.329553 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.329576 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.329591 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.433199 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.433275 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.433294 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.433324 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.433347 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.505758 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/1.log" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.514471 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.514994 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.535625 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54
d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.536727 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.536773 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.536786 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.536805 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.536816 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.550358 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc 
kubenswrapper[4947]: I0125 00:10:08.563886 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.578120 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.592619 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.609784 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.629004 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.641428 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.641506 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.641519 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.641539 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.641554 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.643239 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z 
is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.656031 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"
}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c98711
7ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.668214 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.686985 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.699501 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.715015 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.732284 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.744683 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.744753 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.744769 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.744800 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.744825 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.751628 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.766606 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.801099 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.848288 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.848344 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.848355 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.848375 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.848384 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.951115 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.951266 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.951292 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.951333 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.951360 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.054873 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 07:32:57.283269971 +0000 UTC Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.054990 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.055064 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.055085 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.055114 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.055162 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.158419 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.158479 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.158496 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.158523 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.158542 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.261236 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.261302 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.261319 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.261343 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.261363 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.364419 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.364500 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.364523 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.364552 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.364578 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.467085 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.467189 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.467213 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.467247 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.467272 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.522369 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/2.log" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.523530 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/1.log" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.527760 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885" exitCode=1 Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.527863 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.528043 4947 scope.go:117] "RemoveContainer" containerID="911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.529369 4947 scope.go:117] "RemoveContainer" containerID="46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885" Jan 25 00:10:09 crc kubenswrapper[4947]: E0125 00:10:09.529851 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.553534 4947 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.569574 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.569622 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.569639 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.569670 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.569686 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.583255 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.605668 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.623594 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.641166 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc 
kubenswrapper[4947]: I0125 00:10:09.666816 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.673170 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.673225 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.673242 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.673270 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.673289 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.688393 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.706183 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.725420 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.745927 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.776576 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.779397 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.779452 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.779472 4947 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.779502 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.779521 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.801511 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.822729 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.857278 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 
default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cn
i/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.879403 4947 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.883319 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.883389 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.883406 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.883433 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.883451 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.907851 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.924368 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.987656 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.987716 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.987739 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.987810 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.987837 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.055973 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 14:58:58.27375202 +0000 UTC Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.088916 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.088916 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.088997 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.089423 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:10 crc kubenswrapper[4947]: E0125 00:10:10.089681 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:10 crc kubenswrapper[4947]: E0125 00:10:10.089918 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:10 crc kubenswrapper[4947]: E0125 00:10:10.090175 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:10 crc kubenswrapper[4947]: E0125 00:10:10.090412 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.091057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.091112 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.091168 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.091199 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.091220 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.195299 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.195372 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.195391 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.195422 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.195441 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.298552 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.298609 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.298627 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.298651 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.298669 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.402493 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.402559 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.402577 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.402604 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.402621 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.505849 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.505915 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.505934 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.505959 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.505977 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.535612 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/2.log" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.540454 4947 scope.go:117] "RemoveContainer" containerID="46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885" Jan 25 00:10:10 crc kubenswrapper[4947]: E0125 00:10:10.540648 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.563431 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.584565 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.603953 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.609424 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.609469 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.609485 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.609509 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.609527 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.624335 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.683355 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.697162 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.706893 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.711844 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.711959 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.711975 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.711994 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.712006 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.718003 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc 
kubenswrapper[4947]: I0125 00:10:10.732746 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a4
28c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.744086 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.755794 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.774680 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.790634 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.802863 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.815086 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.815150 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.815163 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.815183 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.815197 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.818046 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.828861 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.851021 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.918450 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.918498 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.918518 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.918543 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.918561 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.022099 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.022224 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.022242 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.022270 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.022290 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.057068 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 00:17:21.542198067 +0000 UTC Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.111909 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.125772 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.125836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.125856 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.125884 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.125903 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.134044 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.152373 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.189094 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.209911 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.225583 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc 
kubenswrapper[4947]: I0125 00:10:11.229063 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.229159 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.229180 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.229211 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.229229 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.241697 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016
dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.259620 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.278242 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.297285 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.323522 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.332801 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.333039 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.333219 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.333391 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.333510 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.340830 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z 
is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.361275 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"
}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c98711
7ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.376776 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.396871 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.416960 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.433812 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.436249 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.436309 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.436329 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc 
kubenswrapper[4947]: I0125 00:10:11.436546 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.436566 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.540610 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.540671 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.540685 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.540708 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.540749 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.644251 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.644296 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.644312 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.644340 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.644357 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.747784 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.747851 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.747870 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.747896 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.747914 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.797930 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.798057 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.798109 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.798191 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798249 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:10:43.798204955 +0000 UTC m=+83.031195435 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.798309 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798348 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798359 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798508 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:43.798474311 +0000 UTC m=+83.031464791 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798544 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:43.798526712 +0000 UTC m=+83.031517192 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798553 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798592 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798617 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798706 4947 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:43.798684886 +0000 UTC m=+83.031675366 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798817 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798844 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798862 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798929 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:43.798910451 +0000 UTC m=+83.031900941 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.852919 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.852999 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.853016 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.853044 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.853083 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.956199 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.956267 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.956286 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.956311 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.956332 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.057608 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:36:42.729751033 +0000 UTC Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.060560 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.060625 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.060646 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.060672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.060688 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.089095 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.089096 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.089190 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:12 crc kubenswrapper[4947]: E0125 00:10:12.089216 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.089162 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:12 crc kubenswrapper[4947]: E0125 00:10:12.089374 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:12 crc kubenswrapper[4947]: E0125 00:10:12.089497 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:12 crc kubenswrapper[4947]: E0125 00:10:12.089656 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.163881 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.163947 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.163971 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.164003 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.164023 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.267190 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.267268 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.267286 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.267335 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.267352 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.371023 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.371083 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.371101 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.371154 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.371174 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.474051 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.474169 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.474205 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.474237 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.474257 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.578673 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.578757 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.578779 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.578806 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.578825 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.607768 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:12 crc kubenswrapper[4947]: E0125 00:10:12.608010 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:10:12 crc kubenswrapper[4947]: E0125 00:10:12.608184 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs podName:9a64fbf1-68fc-4379-9bb7-009c4f2cc812 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:28.608107837 +0000 UTC m=+67.841098307 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs") pod "network-metrics-daemon-hj7kb" (UID: "9a64fbf1-68fc-4379-9bb7-009c4f2cc812") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.682353 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.682421 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.682437 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.682462 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.682480 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.785670 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.785741 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.785765 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.785798 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.785837 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.889483 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.889572 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.889643 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.889684 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.889710 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.992832 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.992912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.992943 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.993044 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.993063 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.058638 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 07:34:13.141633163 +0000 UTC Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.095279 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.095330 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.095339 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.095354 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.095366 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.198947 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.199013 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.199024 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.199047 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.199064 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.302468 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.302912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.302973 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.303009 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.303033 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.406535 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.406620 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.406640 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.406668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.406687 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.509713 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.509791 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.509817 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.509850 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.509872 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.613599 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.613678 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.613700 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.613730 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.613752 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.717818 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.717897 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.717920 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.717947 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.717965 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.821761 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.821837 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.821862 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.821895 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.821915 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.925939 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.926002 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.926020 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.926048 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.926065 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.030288 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.030349 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.030366 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.030391 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.030409 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.058910 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:00:18.730851083 +0000 UTC Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.089511 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.089591 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.089528 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.089528 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.089717 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.089924 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.090051 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.090201 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.133878 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.133936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.133959 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.133988 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.134009 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.237037 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.237102 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.237121 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.237188 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.237211 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.340647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.340712 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.340731 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.340757 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.340775 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.443649 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.443722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.443740 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.444089 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.444118 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.547805 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.547858 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.547876 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.547903 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.547921 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.651237 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.651664 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.651849 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.652044 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.652261 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.756083 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.756449 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.756814 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.756976 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.757109 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.837240 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.837301 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.837320 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.837347 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.837368 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.860550 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:14Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.867037 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.867095 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.867118 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.867184 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.867209 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.890951 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:14Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.897150 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.897208 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.897226 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.897251 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.897269 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.921623 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:14Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.928860 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.928936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.928970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.929004 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.929029 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.952808 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:14Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.958258 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.958642 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.958775 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.959089 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.959264 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.981617 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:14Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.981842 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.983769 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.983817 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.983835 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.983859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.983878 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.060011 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:04:14.471442865 +0000 UTC Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.087086 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.087217 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.087245 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.087274 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.087298 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.190809 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.190887 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.190906 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.190964 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.190983 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.294503 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.294572 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.294584 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.294605 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.294619 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.397887 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.397968 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.397987 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.398024 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.398043 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.501476 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.501531 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.501544 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.501564 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.501579 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.605477 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.605558 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.605576 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.605605 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.605626 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.709025 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.709101 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.709118 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.709177 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.709199 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.813324 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.813415 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.813442 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.813474 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.813496 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.917254 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.917316 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.917332 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.917354 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.917370 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.021558 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.021629 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.021647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.021676 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.021695 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.060641 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:47:35.936331665 +0000 UTC Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.089361 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.089412 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:16 crc kubenswrapper[4947]: E0125 00:10:16.089569 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.089594 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.089673 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:16 crc kubenswrapper[4947]: E0125 00:10:16.089814 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:16 crc kubenswrapper[4947]: E0125 00:10:16.089902 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:16 crc kubenswrapper[4947]: E0125 00:10:16.090109 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.125882 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.125953 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.125976 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.126004 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.126023 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.230496 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.230577 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.230602 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.230635 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.230658 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.334758 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.334823 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.334835 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.334857 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.334872 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.438667 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.438724 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.438734 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.438749 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.438759 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.542362 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.542441 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.542466 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.542506 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.542530 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.646622 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.646700 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.646721 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.646747 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.646767 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.751042 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.751174 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.751202 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.751226 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.751243 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.855671 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.855762 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.855786 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.855820 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.855845 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.958953 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.959033 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.959057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.959089 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.959107 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.061315 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:16:34.459499196 +0000 UTC Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.063305 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.063372 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.063397 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.063427 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.063448 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.167098 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.167243 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.167300 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.167330 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.167390 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.270944 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.271078 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.271180 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.271222 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.271285 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.375274 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.375349 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.375373 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.375404 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.375429 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.478909 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.479384 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.479648 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.479843 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.480074 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.583599 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.583666 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.583684 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.583712 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.583732 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.687713 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.687785 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.687802 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.687828 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.687846 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.790915 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.790963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.790983 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.791008 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.791026 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.894471 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.894522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.894538 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.894562 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.894579 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.998684 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.998748 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.998765 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.998794 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.998812 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.061875 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:51:42.360592676 +0000 UTC Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.088636 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.088798 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.088864 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:18 crc kubenswrapper[4947]: E0125 00:10:18.088868 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.088927 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:18 crc kubenswrapper[4947]: E0125 00:10:18.089118 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:18 crc kubenswrapper[4947]: E0125 00:10:18.089294 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:18 crc kubenswrapper[4947]: E0125 00:10:18.089470 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.102022 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.102105 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.102119 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.102192 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.102207 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.205114 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.205254 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.205279 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.205310 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.205333 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.308743 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.308792 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.308808 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.308826 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.308837 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.412306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.412370 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.412382 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.412405 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.412418 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.516054 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.516116 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.516128 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.516150 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.516165 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.620610 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.620674 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.620690 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.620711 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.620726 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.724583 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.724647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.724660 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.724682 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.724696 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.828254 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.828343 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.828356 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.828374 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.828388 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.931912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.931989 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.932007 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.932034 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.932053 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.036282 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.036356 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.036374 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.036404 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.036429 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.062837 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:37:00.641101896 +0000 UTC Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.139576 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.139640 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.139660 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.139687 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.139709 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.243327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.243402 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.243420 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.243450 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.243471 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.345859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.345914 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.345933 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.345955 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.345972 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.449196 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.449273 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.449296 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.449324 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.449343 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.552372 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.552445 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.552468 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.552496 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.552519 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.655685 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.655753 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.655779 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.655811 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.655835 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.758472 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.758545 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.758563 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.758595 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.758614 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.862566 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.862624 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.862645 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.862670 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.862689 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.965951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.966024 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.966043 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.966079 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.966098 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.063867 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:06:48.660332066 +0000 UTC Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.069573 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.069647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.069672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.069704 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.069722 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.089245 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.089343 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.089343 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.089503 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:20 crc kubenswrapper[4947]: E0125 00:10:20.089498 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:20 crc kubenswrapper[4947]: E0125 00:10:20.089655 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:20 crc kubenswrapper[4947]: E0125 00:10:20.090391 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:20 crc kubenswrapper[4947]: E0125 00:10:20.090575 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.172316 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.172383 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.172406 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.172442 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.172465 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.275783 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.275859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.275889 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.275920 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.275942 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.379214 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.379271 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.379293 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.379323 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.379346 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.488449 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.488569 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.488594 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.488635 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.488654 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.592466 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.592521 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.592538 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.592565 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.592584 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.696347 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.696425 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.696448 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.696511 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.696533 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.800530 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.800578 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.800599 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.800625 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.800646 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.904653 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.904734 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.904756 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.904788 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.904810 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.007936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.007997 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.008014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.008040 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.008058 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.064216 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:54:20.741350199 +0000 UTC Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.108596 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.110405 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.110467 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.110486 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.110512 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.110531 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.132447 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.145399 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.168681 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.186841 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.208165 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.212950 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.213005 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.213025 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 
00:10:21.213050 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.213068 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.229282 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.244105 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.259623 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.281001 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.301818 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.315957 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.316018 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.316038 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.316065 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.316085 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.319360 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.340214 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 
00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.357372 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
5T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.376920 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.394622 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.413088 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.418858 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.418938 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.418966 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc 
kubenswrapper[4947]: I0125 00:10:21.418998 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.419018 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.522109 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.522216 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.522241 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.522273 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.522296 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.625090 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.625187 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.625207 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.625270 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.625289 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.728583 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.728878 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.729046 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.729226 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.729440 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.833568 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.833637 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.833656 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.833682 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.833701 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.936873 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.936944 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.936967 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.936993 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.937011 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.040481 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.040551 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.040570 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.040599 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.040618 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.065317 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:11:27.889785442 +0000 UTC Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.089093 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.089179 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.089188 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:22 crc kubenswrapper[4947]: E0125 00:10:22.089262 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:22 crc kubenswrapper[4947]: E0125 00:10:22.089401 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:22 crc kubenswrapper[4947]: E0125 00:10:22.089645 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.089706 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:22 crc kubenswrapper[4947]: E0125 00:10:22.089768 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.143775 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.143848 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.143866 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.143894 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.143912 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.247805 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.247866 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.247886 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.247916 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.247939 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.351744 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.351810 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.351833 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.351863 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.351888 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.455670 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.455742 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.455764 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.455793 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.455814 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.559011 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.559088 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.559108 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.559201 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.559237 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.662647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.662708 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.662740 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.662765 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.662788 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.765646 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.765707 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.765726 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.765753 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.765771 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.869953 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.870010 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.870027 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.870051 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.870070 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.975452 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.975512 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.975520 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.975561 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.975572 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.066157 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:28:38.960613562 +0000 UTC Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.078697 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.078766 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.078785 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.078814 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.078833 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.182577 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.182660 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.182679 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.182707 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.182729 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.286849 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.286922 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.286953 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.286987 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.287005 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.390852 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.390935 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.390951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.390979 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.391002 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.494497 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.494570 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.494587 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.494618 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.494634 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.597938 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.598019 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.598043 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.598072 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.598093 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.701621 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.701711 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.701736 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.701774 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.701799 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.806049 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.806186 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.806207 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.806246 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.806274 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.909508 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.909635 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.909667 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.909695 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.909714 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.012799 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.012877 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.012902 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.012933 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.012957 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.067117 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:11:50.657964341 +0000 UTC Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.089636 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.089748 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.089779 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:24 crc kubenswrapper[4947]: E0125 00:10:24.089857 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:24 crc kubenswrapper[4947]: E0125 00:10:24.090080 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.090242 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.090668 4947 scope.go:117] "RemoveContainer" containerID="46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885" Jan 25 00:10:24 crc kubenswrapper[4947]: E0125 00:10:24.090942 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" Jan 25 00:10:24 crc kubenswrapper[4947]: E0125 00:10:24.091048 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:24 crc kubenswrapper[4947]: E0125 00:10:24.091171 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.115934 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.115988 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.116005 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.116028 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.116041 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.219558 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.219625 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.219646 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.219671 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.219690 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.323041 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.323118 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.323170 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.323199 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.323217 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.426521 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.426564 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.426574 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.426598 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.426609 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.529929 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.530002 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.530026 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.530054 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.530075 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.632953 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.633025 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.633039 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.633073 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.633091 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.736768 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.736817 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.736829 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.736850 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.736863 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.840774 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.841337 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.841594 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.841827 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.842041 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.945372 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.945431 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.945460 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.945478 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.945487 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.048952 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.049014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.049040 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.049072 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.049094 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.067669 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:54:36.444008856 +0000 UTC Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.152498 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.152563 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.152587 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.152617 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.152641 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.195505 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.195568 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.195580 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.195604 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.195614 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: E0125 00:10:25.214067 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:25Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.226090 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.226465 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.226609 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.226766 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.226911 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: E0125 00:10:25.239871 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:25Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.244445 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.244480 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.244488 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.244503 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.244513 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: E0125 00:10:25.263609 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:25Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.268312 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.268374 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.268387 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.268408 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.268421 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: E0125 00:10:25.282852 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:25Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.287288 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.287434 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.287522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.287605 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.287686 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: E0125 00:10:25.305011 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:25Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:25 crc kubenswrapper[4947]: E0125 00:10:25.305331 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.307471 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.307520 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.307534 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.307556 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.307568 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.410747 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.410809 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.410820 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.410855 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.410867 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.513896 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.513949 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.513960 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.513983 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.513993 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.617424 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.617538 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.617551 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.617568 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.617594 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.721529 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.721601 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.721620 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.721648 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.721666 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.825205 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.825277 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.825296 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.825323 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.825343 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.930524 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.930847 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.931419 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.932716 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.933572 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.036774 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.037479 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.037727 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.037910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.038088 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.068691 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:48:17.847082923 +0000 UTC Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.089674 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.089688 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.089712 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.089718 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:26 crc kubenswrapper[4947]: E0125 00:10:26.090266 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:26 crc kubenswrapper[4947]: E0125 00:10:26.090105 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:26 crc kubenswrapper[4947]: E0125 00:10:26.090667 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:26 crc kubenswrapper[4947]: E0125 00:10:26.090829 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.140889 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.141266 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.141417 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.141607 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.141785 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.245817 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.245885 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.245903 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.245929 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.245950 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.349725 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.349770 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.349782 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.349800 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.349814 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.453090 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.453156 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.453169 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.453187 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.453204 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.556407 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.556639 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.556733 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.556809 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.556872 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.659811 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.659868 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.659886 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.659906 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.659919 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.763209 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.763251 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.763262 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.763279 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.763290 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.866766 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.866835 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.866855 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.866881 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.866899 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.969748 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.969794 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.969807 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.969828 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.969842 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.070798 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 09:34:29.54738216 +0000 UTC Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.072990 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.073042 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.073058 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.073078 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.073091 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.175702 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.175778 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.175797 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.175823 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.175838 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.278940 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.279018 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.279037 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.279065 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.279087 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.382427 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.382475 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.382488 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.382506 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.382518 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.485636 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.485684 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.485699 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.485719 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.485732 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.588368 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.588428 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.588453 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.588481 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.588504 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.690226 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.690307 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.690322 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.690337 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.690348 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.792213 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.792251 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.792264 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.792280 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.792291 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.894947 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.894986 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.894998 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.895013 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.895026 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.997833 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.997882 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.997892 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.997910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.997923 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.071551 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:33:57.31110085 +0000 UTC Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.089321 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:28 crc kubenswrapper[4947]: E0125 00:10:28.089517 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.090358 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.090440 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.090486 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:28 crc kubenswrapper[4947]: E0125 00:10:28.090698 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:28 crc kubenswrapper[4947]: E0125 00:10:28.090835 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:28 crc kubenswrapper[4947]: E0125 00:10:28.090957 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.100936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.100979 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.101003 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.101030 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.101050 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.204188 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.204233 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.204251 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.204276 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.204294 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.307161 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.307235 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.307258 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.307290 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.307316 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.411014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.411075 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.411093 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.411118 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.411698 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.514332 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.514409 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.514428 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.514453 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.514470 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.617333 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.617398 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.617418 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.617446 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.617473 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.701566 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:28 crc kubenswrapper[4947]: E0125 00:10:28.701829 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:10:28 crc kubenswrapper[4947]: E0125 00:10:28.701939 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs podName:9a64fbf1-68fc-4379-9bb7-009c4f2cc812 nodeName:}" failed. No retries permitted until 2026-01-25 00:11:00.701910097 +0000 UTC m=+99.934900567 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs") pod "network-metrics-daemon-hj7kb" (UID: "9a64fbf1-68fc-4379-9bb7-009c4f2cc812") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.721010 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.721074 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.721092 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.721117 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.721161 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.823653 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.823744 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.823758 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.823782 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.823798 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.926244 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.926291 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.926327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.926346 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.926364 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.028706 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.028772 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.028797 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.028825 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.028844 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.072434 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 13:15:55.892322349 +0000 UTC Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.131239 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.131342 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.131373 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.131409 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.131434 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.235278 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.235331 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.235343 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.235360 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.235372 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.338219 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.338293 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.338314 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.338387 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.338439 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.441970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.442030 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.442049 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.442076 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.442092 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.544392 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.544429 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.544438 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.544454 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.544463 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.647504 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.647570 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.647583 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.647605 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.647623 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.749893 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.750004 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.750017 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.750035 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.750046 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.852888 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.852931 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.852939 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.852955 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.852965 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.955466 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.955518 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.955530 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.955550 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.955562 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.059352 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.059707 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.059895 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.060027 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.060193 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.073214 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:15:14.048557083 +0000 UTC Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.089742 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.089821 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.089960 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:30 crc kubenswrapper[4947]: E0125 00:10:30.089960 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.090010 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:30 crc kubenswrapper[4947]: E0125 00:10:30.090194 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:30 crc kubenswrapper[4947]: E0125 00:10:30.090306 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:30 crc kubenswrapper[4947]: E0125 00:10:30.090382 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.162964 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.163015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.163032 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.163057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.163074 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.266171 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.266212 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.266225 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.266244 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.266257 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.369223 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.369253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.369261 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.369275 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.369285 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.472036 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.472085 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.472098 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.472116 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.472150 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.575884 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.575959 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.575978 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.576003 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.576020 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.679236 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.679291 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.679306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.679328 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.679345 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.782737 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.782813 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.782835 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.782894 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.782911 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.886412 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.886522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.886543 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.886571 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.886598 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.989253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.989321 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.989339 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.989365 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.989387 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.074254 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 13:07:41.061589687 +0000 UTC Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.091161 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.091185 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.091194 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.091210 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.091220 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.106014 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.126719 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.139333 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.160209 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.173692 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.186807 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.193743 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.193803 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.193821 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.193849 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.193868 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.199478 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc 
kubenswrapper[4947]: I0125 00:10:31.217604 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.230286 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.243362 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.258856 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.274515 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.289456 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.296224 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.296268 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.296280 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.296297 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.296308 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.302211 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.314686 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.329363 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.342072 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.401295 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.401346 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.401358 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc 
kubenswrapper[4947]: I0125 00:10:31.401384 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.401406 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.504041 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.504086 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.504097 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.504117 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.504168 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.656728 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.656763 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.656772 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.656786 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.656795 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.660911 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/0.log" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.661019 4947 generic.go:334] "Generic (PLEG): container finished" podID="2d914454-2c17-47f2-aa53-aba3bfaad296" containerID="e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656" exitCode=1 Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.661081 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9fspn" event={"ID":"2d914454-2c17-47f2-aa53-aba3bfaad296","Type":"ContainerDied","Data":"e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.661838 4947 scope.go:117] "RemoveContainer" containerID="e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.680732 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.699371 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.712582 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.729792 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c
97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.741412 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.760440 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.760507 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.760527 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc 
kubenswrapper[4947]: I0125 00:10:31.760958 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.760999 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.762291 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.780103 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.795911 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.806708 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.819913 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.836796 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b8
6e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa930
89f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.852573 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:30Z\\\",\\\"message\\\":\\\"2026-01-25T00:09:45+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf\\\\n2026-01-25T00:09:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf to /host/opt/cni/bin/\\\\n2026-01-25T00:09:45Z [verbose] multus-daemon started\\\\n2026-01-25T00:09:45Z [verbose] Readiness Indicator file check\\\\n2026-01-25T00:10:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.861670 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.863927 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.863985 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.864000 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.864018 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.864028 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.874562 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc 
kubenswrapper[4947]: I0125 00:10:31.891624 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.913885 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.931509 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.967029 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.967103 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.967113 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.967149 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.967162 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.070531 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.070589 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.070606 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.070630 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.070647 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.075009 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:46:40.370629733 +0000 UTC Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.089485 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:32 crc kubenswrapper[4947]: E0125 00:10:32.089688 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.089949 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:32 crc kubenswrapper[4947]: E0125 00:10:32.090054 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.090326 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.090400 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:32 crc kubenswrapper[4947]: E0125 00:10:32.090489 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:32 crc kubenswrapper[4947]: E0125 00:10:32.090573 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.174234 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.174298 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.174308 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.174337 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.174348 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.276892 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.276962 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.276984 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.277019 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.277044 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.384747 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.384819 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.384838 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.384868 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.384887 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.487672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.487718 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.487731 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.487751 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.487766 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.590828 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.590920 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.590937 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.590963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.590984 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.667487 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/0.log" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.667575 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9fspn" event={"ID":"2d914454-2c17-47f2-aa53-aba3bfaad296","Type":"ContainerStarted","Data":"6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.686813 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.694303 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.694355 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.694369 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.694385 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.694411 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.704980 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.719986 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.740332 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.758017 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:30Z\\\",\\\"message\\\":\\\"2026-01-25T00:09:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf\\\\n2026-01-25T00:09:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf to /host/opt/cni/bin/\\\\n2026-01-25T00:09:45Z [verbose] multus-daemon started\\\\n2026-01-25T00:09:45Z [verbose] Readiness Indicator file check\\\\n2026-01-25T00:10:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:10:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.769728 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.782043 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc 
kubenswrapper[4947]: I0125 00:10:32.793990 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.812968 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.813038 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.813061 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.813094 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.813117 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.813175 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7
018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.825850 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.841209 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.854331 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.868257 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.882233 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.895997 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.915469 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.915494 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.915502 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.915516 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.915526 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.924491 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.941673 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.017923 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.017967 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.017978 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc 
kubenswrapper[4947]: I0125 00:10:33.017993 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.018003 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.075773 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:10:44.942338938 +0000 UTC Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.120647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.120690 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.120704 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.120720 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.120734 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.223719 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.223773 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.223786 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.223804 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.223815 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.326897 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.326970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.326981 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.327001 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.327012 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.429011 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.429057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.429069 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.429089 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.429101 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.530988 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.531015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.531026 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.531040 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.531051 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.634512 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.634572 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.634596 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.634627 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.634650 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.737853 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.737926 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.737947 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.737977 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.737999 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.840689 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.840736 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.840747 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.840768 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.840780 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.943145 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.943183 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.943194 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.943211 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.943222 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.046552 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.046592 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.046602 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.046618 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.046630 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.076353 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:12:56.414294699 +0000 UTC Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.088846 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.088957 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.088856 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.088837 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:34 crc kubenswrapper[4947]: E0125 00:10:34.089067 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:34 crc kubenswrapper[4947]: E0125 00:10:34.089221 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:34 crc kubenswrapper[4947]: E0125 00:10:34.089378 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:34 crc kubenswrapper[4947]: E0125 00:10:34.089501 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.149793 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.149865 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.149882 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.149909 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.149928 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.253895 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.253973 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.254002 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.254039 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.254063 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.356877 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.357185 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.357384 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.357590 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.357746 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.460213 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.460277 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.460295 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.460322 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.460339 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.562954 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.563028 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.563047 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.563074 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.563092 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.665662 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.665704 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.665716 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.665736 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.665747 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.769262 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.769340 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.769378 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.769413 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.769437 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.872554 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.872630 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.872662 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.872694 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.872716 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.975592 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.975658 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.975677 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.975700 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.975719 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.076912 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:48:47.543066077 +0000 UTC Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.082731 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.082791 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.082813 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.082839 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.082857 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.185655 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.185722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.185745 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.185776 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.185802 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.288477 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.288549 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.288574 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.288605 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.288628 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.391890 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.391943 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.391956 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.391977 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.391996 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.494772 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.494840 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.494857 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.494883 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.494902 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.535891 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.535951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.535968 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.535994 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.536018 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: E0125 00:10:35.555983 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:35Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.560910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.560964 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.560977 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.561000 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.561012 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: E0125 00:10:35.577788 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:35Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.582333 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.582399 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.582417 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.582444 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.582462 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: E0125 00:10:35.602954 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:35Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.607259 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.607315 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.607334 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.607361 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.607379 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: E0125 00:10:35.622914 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:35Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.627708 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.627765 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.627777 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.627796 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.627811 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: E0125 00:10:35.641631 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:35Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:35 crc kubenswrapper[4947]: E0125 00:10:35.641873 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.644142 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.644382 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.644400 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.644423 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.644441 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.751848 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.752064 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.752087 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.752253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.752277 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.856224 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.856317 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.856369 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.856399 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.856417 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.959930 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.959990 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.960006 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.960037 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.960054 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.062920 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.062989 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.063005 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.063026 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.063039 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.077673 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:17:56.934438845 +0000 UTC Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.089011 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.089166 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:36 crc kubenswrapper[4947]: E0125 00:10:36.089193 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.089247 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:36 crc kubenswrapper[4947]: E0125 00:10:36.089364 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.089702 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:36 crc kubenswrapper[4947]: E0125 00:10:36.089781 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:36 crc kubenswrapper[4947]: E0125 00:10:36.090036 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.090229 4947 scope.go:117] "RemoveContainer" containerID="46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.166668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.166745 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.166764 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.166793 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.166817 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.269958 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.270016 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.270033 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.270053 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.270066 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.373140 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.373181 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.373191 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.373206 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.373218 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.475935 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.475985 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.475997 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.476015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.476026 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.578821 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.578873 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.578891 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.578914 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.578946 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.681358 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.681446 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.681798 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.682062 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.682167 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.682776 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/2.log" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.686712 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.698715 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea8
3e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.709230 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.722808 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.732666 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.748811 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:10:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.763661 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.776640 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc 
kubenswrapper[4947]: I0125 00:10:36.785174 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.785214 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.785225 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.785245 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.785256 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.790671 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016
dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.802041 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.813141 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.823020 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.837917 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.850117 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:30Z\\\",\\\"message\\\":\\\"2026-01-25T00:09:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf\\\\n2026-01-25T00:09:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf to /host/opt/cni/bin/\\\\n2026-01-25T00:09:45Z [verbose] multus-daemon started\\\\n2026-01-25T00:09:45Z [verbose] Readiness Indicator file check\\\\n2026-01-25T00:10:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:10:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.861525 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T0
0:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.872015 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.883190 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.888176 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.888230 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.888248 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 
00:10:36.888273 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.888291 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.900571 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f
4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.991271 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.991313 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.991329 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.991347 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.991360 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.078457 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 15:09:33.86890964 +0000 UTC Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.093612 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.093683 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.093701 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.093728 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.093746 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.196867 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.196910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.196918 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.196943 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.196962 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.299812 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.299886 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.299912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.299945 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.299970 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.403952 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.404037 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.404061 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.404097 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.404156 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.507263 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.507361 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.507379 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.507406 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.507424 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.611461 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.611522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.611540 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.611568 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.611589 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.692325 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/3.log" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.692949 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/2.log" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.696056 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" exitCode=1 Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.696105 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.696182 4947 scope.go:117] "RemoveContainer" containerID="46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.697740 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" Jan 25 00:10:37 crc kubenswrapper[4947]: E0125 00:10:37.698159 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.715077 4947 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.715180 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.715203 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.715226 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.715246 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.716689 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.732221 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.756940 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:37Z\\\",\\\"message\\\":\\\"5 00:10:36.980299 6986 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:36.980305 6986 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:36.980311 6986 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:36.980181 6986 lb_config.go:1031] Cluster endpoints for openshift-ingress/router-internal-default for network=default are: map[]\\\\nI0125 00:10:36.980302 6986 
services_controller.go:443] Built service openshift-machine-api/control-plane-machine-set-operator LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.41\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0125 00:10:36.980332 6986 services_controller.go:443] Built service openshift-ingress/router-internal-default LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.176\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:80, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\"
:\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.772324 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.787643 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.799764 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.811607 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.817752 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.817813 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.817831 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.817857 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.817877 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.828497 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.842438 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:30Z\\\",\\\"message\\\":\\\"2026-01-25T00:09:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf\\\\n2026-01-25T00:09:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf to /host/opt/cni/bin/\\\\n2026-01-25T00:09:45Z [verbose] multus-daemon started\\\\n2026-01-25T00:09:45Z [verbose] Readiness Indicator file check\\\\n2026-01-25T00:10:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:10:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-
lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.854082 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.865995 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc 
kubenswrapper[4947]: I0125 00:10:37.882104 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.895358 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.914790 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.919836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.919880 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.919889 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.919906 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.919918 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.932335 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.951144 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.966858 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.022768 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.022822 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.022836 4947 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.022855 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.022867 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.078548 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 02:45:43.771394302 +0000 UTC Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.088923 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.089014 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.089020 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.089035 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:38 crc kubenswrapper[4947]: E0125 00:10:38.089283 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:38 crc kubenswrapper[4947]: E0125 00:10:38.089385 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:38 crc kubenswrapper[4947]: E0125 00:10:38.089596 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:38 crc kubenswrapper[4947]: E0125 00:10:38.089651 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.126164 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.126206 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.126224 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.126245 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.126261 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.228938 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.228986 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.228999 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.229018 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.229032 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.331932 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.331986 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.332011 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.332044 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.332067 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.435108 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.435206 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.435224 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.435256 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.435276 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.537625 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.537699 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.537723 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.537754 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.537778 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.640612 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.640662 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.640680 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.640703 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.640723 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.702253 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/3.log" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.747505 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.747576 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.747598 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.747624 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.747644 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.850992 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.851061 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.851080 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.851108 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.851155 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.953591 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.953631 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.953675 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.953696 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.953711 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.056253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.056291 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.056305 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.056356 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.056371 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.078933 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 20:24:03.664078769 +0000 UTC Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.159253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.159493 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.159510 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.159534 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.159550 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.262202 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.262269 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.262288 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.262315 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.262345 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.365983 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.366041 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.366053 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.366074 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.366432 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.470019 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.470100 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.470165 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.470199 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.470220 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.574068 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.574165 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.574186 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.574211 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.574230 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.677013 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.677160 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.677182 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.677208 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.677226 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.780467 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.780514 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.780530 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.780557 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.780575 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.884859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.884941 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.884963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.884994 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.885017 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.989251 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.989293 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.989306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.989323 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.989362 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.079942 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:30:43.218016665 +0000 UTC Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.089291 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.089320 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:40 crc kubenswrapper[4947]: E0125 00:10:40.089466 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.090242 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:40 crc kubenswrapper[4947]: E0125 00:10:40.090494 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.090857 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:40 crc kubenswrapper[4947]: E0125 00:10:40.090911 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:40 crc kubenswrapper[4947]: E0125 00:10:40.090948 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.092830 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.092902 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.092925 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.092955 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.092977 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.105161 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.196304 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.196391 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.196418 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.196455 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.196487 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.300267 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.300333 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.300350 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.300375 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.300394 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.403368 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.403444 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.403473 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.403501 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.403518 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.506867 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.506912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.506926 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.506944 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.506956 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.610220 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.610301 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.610318 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.610344 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.610362 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.717699 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.717751 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.717763 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.717816 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.717831 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.820963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.821027 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.821049 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.821074 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.821091 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.926265 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.926346 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.926370 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.926399 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.926419 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.030058 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.030118 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.030161 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.030186 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.030204 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.080822 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:05:48.535474002 +0000 UTC Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.107501 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 
00:10:41.127219 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.132315 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.132378 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.132396 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.132420 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.132438 4947 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.140918 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.170683 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:37Z\\\",\\\"message\\\":\\\"5 00:10:36.980299 6986 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:36.980305 6986 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:36.980311 6986 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:36.980181 6986 lb_config.go:1031] Cluster endpoints for openshift-ingress/router-internal-default for network=default are: map[]\\\\nI0125 00:10:36.980302 6986 
services_controller.go:443] Built service openshift-machine-api/control-plane-machine-set-operator LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.41\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0125 00:10:36.980332 6986 services_controller.go:443] Built service openshift-ingress/router-internal-default LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.176\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:80, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\"
:\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.183119 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.198829 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.215520 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.227626 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.236951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.237017 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.237037 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.237061 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.237077 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.240793 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.258514 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:30Z\\\",\\\"message\\\":\\\"2026-01-25T00:09:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf\\\\n2026-01-25T00:09:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf to /host/opt/cni/bin/\\\\n2026-01-25T00:09:45Z [verbose] multus-daemon started\\\\n2026-01-25T00:09:45Z [verbose] Readiness Indicator file check\\\\n2026-01-25T00:10:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:10:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-
lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.272828 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.286225 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc 
kubenswrapper[4947]: I0125 00:10:41.300815 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.320290 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c
97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.332726 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.339964 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.340043 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.340058 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.340075 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.340086 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.352768 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.368401 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.382262 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d4f57b1-279d-46a6-a753-2f9221644cfa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20298eba3286e5999a381eba946a8d66115b05b2c0b73c61c7c005aa95bd1f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28d9e5ded99984c96a07848bed082c840a86b273e0809a7103e07e789b81147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e28d9e5ded99984c96a07848bed082c840a86b273e0809a7103e07e789b81147\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.443818 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.443885 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.443912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.443946 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.443970 4947 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.545589 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.545622 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.545630 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.545642 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.545650 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.648432 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.648461 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.648469 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.648482 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.648490 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.751420 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.751464 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.751478 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.751496 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.751510 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.854458 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.854514 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.854533 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.854556 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.854574 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.956279 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.956414 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.956426 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.956441 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.956453 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.059156 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.059200 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.059212 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.059231 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.059245 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.081409 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 02:28:06.932825344 +0000 UTC Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.088847 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.089009 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:42 crc kubenswrapper[4947]: E0125 00:10:42.089213 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.089291 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.089326 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:42 crc kubenswrapper[4947]: E0125 00:10:42.089956 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:42 crc kubenswrapper[4947]: E0125 00:10:42.090081 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:42 crc kubenswrapper[4947]: E0125 00:10:42.090194 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.161702 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.161817 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.161836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.161857 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.161876 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.263840 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.263965 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.263985 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.264007 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.264023 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.366825 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.366880 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.366897 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.366919 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.366935 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.469532 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.469580 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.469599 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.469622 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.469639 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.572794 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.572862 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.572886 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.572914 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.572935 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.675634 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.675701 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.675721 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.675750 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.675771 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.781544 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.782095 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.782114 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.782167 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.782186 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.885448 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.885522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.885540 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.885568 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.885592 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.988859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.988932 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.988946 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.988968 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.988985 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.081737 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:47:08.668750045 +0000 UTC Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.092588 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.092635 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.092647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.092665 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.092678 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.195540 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.195607 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.195625 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.195651 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.195670 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.298628 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.298693 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.298710 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.298733 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.298751 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.402281 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.402344 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.402363 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.402386 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.402403 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.505327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.505393 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.505412 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.505438 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.505457 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.608853 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.608914 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.608936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.608967 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.608989 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.711581 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.711625 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.711645 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.711672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.711690 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.814589 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.814645 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.814663 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.814688 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.814704 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.888709 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.888888 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-25 00:11:47.888853956 +0000 UTC m=+147.121844436 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.888954 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.889069 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.889221 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.889285 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.889296 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.889368 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.889368 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.889394 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.889478 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.889451991 +0000 UTC m=+147.122442471 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.889516 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.889500642 +0000 UTC m=+147.122491122 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.889530 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.889672 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.889643635 +0000 UTC m=+147.122634115 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.890109 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.890201 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.890230 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.890620 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.890327891 +0000 UTC m=+147.123318381 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.918764 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.918833 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.918852 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.918876 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.918893 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.022522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.022588 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.022611 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.022641 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.022663 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.082311 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:02:42.275813165 +0000 UTC Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.088612 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.088685 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:44 crc kubenswrapper[4947]: E0125 00:10:44.088834 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.089080 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:44 crc kubenswrapper[4947]: E0125 00:10:44.089262 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.089479 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:44 crc kubenswrapper[4947]: E0125 00:10:44.089494 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:44 crc kubenswrapper[4947]: E0125 00:10:44.089785 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.126080 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.126180 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.126206 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.126229 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.126247 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.229585 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.229649 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.229666 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.229699 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.229720 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.332809 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.332874 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.332898 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.332927 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.332949 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.436046 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.436110 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.436160 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.436185 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.436202 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.539479 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.539562 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.539585 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.539615 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.539637 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.642327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.642376 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.642387 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.642403 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.642414 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.745487 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.745563 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.745581 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.745608 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.745628 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.848797 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.848860 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.848877 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.848901 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.848918 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.951891 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.951952 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.951970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.951991 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.952010 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.054836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.054898 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.054915 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.054939 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.054957 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:45Z","lastTransitionTime":"2026-01-25T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.082737 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 12:06:06.269193241 +0000 UTC Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.157326 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.157380 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.157396 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.157418 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.157436 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:45Z","lastTransitionTime":"2026-01-25T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.188820 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.189971 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" Jan 25 00:10:45 crc kubenswrapper[4947]: E0125 00:10:45.190252 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.210346 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.260738 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.260783 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.260801 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:45 crc 
kubenswrapper[4947]: I0125 00:10:45.260823 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.260839 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:45Z","lastTransitionTime":"2026-01-25T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.318058 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-2w6nd" podStartSLOduration=64.318029079 podStartE2EDuration="1m4.318029079s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.280793701 +0000 UTC m=+84.513784171" watchObservedRunningTime="2026-01-25 00:10:45.318029079 +0000 UTC m=+84.551019559" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.362690 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.362725 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.362735 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.362751 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.362762 4947 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:45Z","lastTransitionTime":"2026-01-25T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.376892 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" podStartSLOduration=64.376865442 podStartE2EDuration="1m4.376865442s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.351648828 +0000 UTC m=+84.584639318" watchObservedRunningTime="2026-01-25 00:10:45.376865442 +0000 UTC m=+84.609855912" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.377088 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9fspn" podStartSLOduration=64.377080757 podStartE2EDuration="1m4.377080757s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.375523519 +0000 UTC m=+84.608514029" watchObservedRunningTime="2026-01-25 00:10:45.377080757 +0000 UTC m=+84.610071237" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.393593 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-hf8gg" podStartSLOduration=64.393569879 podStartE2EDuration="1m4.393569879s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.392976424 
+0000 UTC m=+84.625966884" watchObservedRunningTime="2026-01-25 00:10:45.393569879 +0000 UTC m=+84.626560359" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.433552 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.433524652 podStartE2EDuration="1m6.433524652s" podCreationTimestamp="2026-01-25 00:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.432739353 +0000 UTC m=+84.665729813" watchObservedRunningTime="2026-01-25 00:10:45.433524652 +0000 UTC m=+84.666515172" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.464485 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.464534 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.464542 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.464555 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.464565 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:45Z","lastTransitionTime":"2026-01-25T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.534984 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" podStartSLOduration=64.534961253 podStartE2EDuration="1m4.534961253s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.534282086 +0000 UTC m=+84.767272526" watchObservedRunningTime="2026-01-25 00:10:45.534961253 +0000 UTC m=+84.767951723" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.548080 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=5.548053681 podStartE2EDuration="5.548053681s" podCreationTimestamp="2026-01-25 00:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.547463598 +0000 UTC m=+84.780454068" watchObservedRunningTime="2026-01-25 00:10:45.548053681 +0000 UTC m=+84.781044181" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.566606 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.566808 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.566867 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.566927 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.567006 4947 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:45Z","lastTransitionTime":"2026-01-25T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.586631 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=65.586604201 podStartE2EDuration="1m5.586604201s" podCreationTimestamp="2026-01-25 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.569253108 +0000 UTC m=+84.802243628" watchObservedRunningTime="2026-01-25 00:10:45.586604201 +0000 UTC m=+84.819594681" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.653319 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.653378 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.653396 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.653421 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.653438 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:45Z","lastTransitionTime":"2026-01-25T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.708227 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.708193483 podStartE2EDuration="38.708193483s" podCreationTimestamp="2026-01-25 00:10:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.585594656 +0000 UTC m=+84.818585096" watchObservedRunningTime="2026-01-25 00:10:45.708193483 +0000 UTC m=+84.941183963" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.708439 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb"] Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.709067 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.713425 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.713819 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.714995 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.715308 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.731409 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podStartSLOduration=64.731388048 podStartE2EDuration="1m4.731388048s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.731345446 +0000 UTC m=+84.964335916" watchObservedRunningTime="2026-01-25 00:10:45.731388048 +0000 UTC m=+84.964378508" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.811285 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ceb79108-eaf9-42eb-9d3a-125e321f4004-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.811402 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ceb79108-eaf9-42eb-9d3a-125e321f4004-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.811436 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ceb79108-eaf9-42eb-9d3a-125e321f4004-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.811519 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ceb79108-eaf9-42eb-9d3a-125e321f4004-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.811580 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ceb79108-eaf9-42eb-9d3a-125e321f4004-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.913677 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ceb79108-eaf9-42eb-9d3a-125e321f4004-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.913843 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ceb79108-eaf9-42eb-9d3a-125e321f4004-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.913898 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ceb79108-eaf9-42eb-9d3a-125e321f4004-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.913932 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ceb79108-eaf9-42eb-9d3a-125e321f4004-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.913972 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ceb79108-eaf9-42eb-9d3a-125e321f4004-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.914902 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ceb79108-eaf9-42eb-9d3a-125e321f4004-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.915039 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ceb79108-eaf9-42eb-9d3a-125e321f4004-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.915541 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ceb79108-eaf9-42eb-9d3a-125e321f4004-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.926622 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ceb79108-eaf9-42eb-9d3a-125e321f4004-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.963545 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ceb79108-eaf9-42eb-9d3a-125e321f4004-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.023406 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.082894 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:51:00.760056424 +0000 UTC Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.083001 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.089580 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.089667 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:46 crc kubenswrapper[4947]: E0125 00:10:46.089750 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.089693 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.089668 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:46 crc kubenswrapper[4947]: E0125 00:10:46.090005 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:46 crc kubenswrapper[4947]: E0125 00:10:46.090059 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:46 crc kubenswrapper[4947]: E0125 00:10:46.090191 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.096569 4947 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.744650 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" event={"ID":"ceb79108-eaf9-42eb-9d3a-125e321f4004","Type":"ContainerStarted","Data":"9d57fef4f0074537df38a77431aafeb8bb062633ba4720c1650ce7c4a8711b09"} Jan 25 00:10:47 crc kubenswrapper[4947]: I0125 00:10:47.750225 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" event={"ID":"ceb79108-eaf9-42eb-9d3a-125e321f4004","Type":"ContainerStarted","Data":"8f650e3519d18c65f98b862b0a2197200c291afbf46ff4bb8f6930702f67e577"} Jan 25 00:10:47 crc kubenswrapper[4947]: I0125 00:10:47.770584 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" podStartSLOduration=66.770560899 podStartE2EDuration="1m6.770560899s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:47.769730479 +0000 UTC m=+87.002720999" watchObservedRunningTime="2026-01-25 00:10:47.770560899 +0000 UTC m=+87.003551369" Jan 25 00:10:48 crc kubenswrapper[4947]: I0125 00:10:48.089177 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:48 crc kubenswrapper[4947]: I0125 00:10:48.089257 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:48 crc kubenswrapper[4947]: E0125 00:10:48.089341 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:48 crc kubenswrapper[4947]: I0125 00:10:48.089392 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:48 crc kubenswrapper[4947]: E0125 00:10:48.089471 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:48 crc kubenswrapper[4947]: E0125 00:10:48.089676 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:48 crc kubenswrapper[4947]: I0125 00:10:48.090079 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:48 crc kubenswrapper[4947]: E0125 00:10:48.090422 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:50 crc kubenswrapper[4947]: I0125 00:10:50.089772 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:50 crc kubenswrapper[4947]: I0125 00:10:50.089800 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:50 crc kubenswrapper[4947]: I0125 00:10:50.089824 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:50 crc kubenswrapper[4947]: I0125 00:10:50.090028 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:50 crc kubenswrapper[4947]: E0125 00:10:50.090244 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:50 crc kubenswrapper[4947]: E0125 00:10:50.090397 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:50 crc kubenswrapper[4947]: E0125 00:10:50.090650 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:50 crc kubenswrapper[4947]: E0125 00:10:50.090768 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:52 crc kubenswrapper[4947]: I0125 00:10:52.089324 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:52 crc kubenswrapper[4947]: I0125 00:10:52.089354 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:52 crc kubenswrapper[4947]: E0125 00:10:52.089624 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:52 crc kubenswrapper[4947]: I0125 00:10:52.089697 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:52 crc kubenswrapper[4947]: E0125 00:10:52.089965 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:52 crc kubenswrapper[4947]: E0125 00:10:52.090025 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:52 crc kubenswrapper[4947]: I0125 00:10:52.090324 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:52 crc kubenswrapper[4947]: E0125 00:10:52.090536 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:52 crc kubenswrapper[4947]: I0125 00:10:52.122093 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 25 00:10:54 crc kubenswrapper[4947]: I0125 00:10:54.089766 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:54 crc kubenswrapper[4947]: E0125 00:10:54.090010 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:54 crc kubenswrapper[4947]: I0125 00:10:54.090183 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:54 crc kubenswrapper[4947]: E0125 00:10:54.090318 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:54 crc kubenswrapper[4947]: I0125 00:10:54.090451 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:54 crc kubenswrapper[4947]: I0125 00:10:54.090451 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:54 crc kubenswrapper[4947]: E0125 00:10:54.090737 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:54 crc kubenswrapper[4947]: E0125 00:10:54.090810 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:56 crc kubenswrapper[4947]: I0125 00:10:56.089576 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:56 crc kubenswrapper[4947]: I0125 00:10:56.089630 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:56 crc kubenswrapper[4947]: I0125 00:10:56.089679 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:56 crc kubenswrapper[4947]: I0125 00:10:56.089607 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:56 crc kubenswrapper[4947]: E0125 00:10:56.089779 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:56 crc kubenswrapper[4947]: E0125 00:10:56.089941 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:56 crc kubenswrapper[4947]: E0125 00:10:56.090088 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:56 crc kubenswrapper[4947]: E0125 00:10:56.090213 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:58 crc kubenswrapper[4947]: I0125 00:10:58.089395 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:58 crc kubenswrapper[4947]: I0125 00:10:58.089530 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:58 crc kubenswrapper[4947]: I0125 00:10:58.090043 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:58 crc kubenswrapper[4947]: E0125 00:10:58.090262 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:58 crc kubenswrapper[4947]: I0125 00:10:58.090412 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:58 crc kubenswrapper[4947]: I0125 00:10:58.090791 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" Jan 25 00:10:58 crc kubenswrapper[4947]: E0125 00:10:58.090940 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:58 crc kubenswrapper[4947]: E0125 00:10:58.091081 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" Jan 25 00:10:58 crc kubenswrapper[4947]: E0125 00:10:58.091187 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:58 crc kubenswrapper[4947]: E0125 00:10:58.091240 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:00 crc kubenswrapper[4947]: I0125 00:11:00.088746 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:00 crc kubenswrapper[4947]: I0125 00:11:00.088806 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:00 crc kubenswrapper[4947]: I0125 00:11:00.088779 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:00 crc kubenswrapper[4947]: E0125 00:11:00.088947 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:00 crc kubenswrapper[4947]: I0125 00:11:00.088990 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:00 crc kubenswrapper[4947]: E0125 00:11:00.089094 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:00 crc kubenswrapper[4947]: E0125 00:11:00.089172 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:00 crc kubenswrapper[4947]: E0125 00:11:00.089250 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:00 crc kubenswrapper[4947]: I0125 00:11:00.796781 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:00 crc kubenswrapper[4947]: E0125 00:11:00.797062 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:11:00 crc kubenswrapper[4947]: E0125 00:11:00.797215 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs podName:9a64fbf1-68fc-4379-9bb7-009c4f2cc812 nodeName:}" failed. 
No retries permitted until 2026-01-25 00:12:04.797183901 +0000 UTC m=+164.030174371 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs") pod "network-metrics-daemon-hj7kb" (UID: "9a64fbf1-68fc-4379-9bb7-009c4f2cc812") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:11:01 crc kubenswrapper[4947]: I0125 00:11:01.133215 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=9.133189015 podStartE2EDuration="9.133189015s" podCreationTimestamp="2026-01-25 00:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:01.130709745 +0000 UTC m=+100.363700225" watchObservedRunningTime="2026-01-25 00:11:01.133189015 +0000 UTC m=+100.366179495" Jan 25 00:11:02 crc kubenswrapper[4947]: I0125 00:11:02.089251 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:02 crc kubenswrapper[4947]: I0125 00:11:02.089323 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:02 crc kubenswrapper[4947]: I0125 00:11:02.089391 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:02 crc kubenswrapper[4947]: E0125 00:11:02.089468 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:02 crc kubenswrapper[4947]: I0125 00:11:02.089530 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:02 crc kubenswrapper[4947]: E0125 00:11:02.089621 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:02 crc kubenswrapper[4947]: E0125 00:11:02.089747 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:02 crc kubenswrapper[4947]: E0125 00:11:02.089918 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:04 crc kubenswrapper[4947]: I0125 00:11:04.089689 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:04 crc kubenswrapper[4947]: E0125 00:11:04.089931 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:04 crc kubenswrapper[4947]: I0125 00:11:04.090047 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:04 crc kubenswrapper[4947]: E0125 00:11:04.090174 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:04 crc kubenswrapper[4947]: I0125 00:11:04.090261 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:04 crc kubenswrapper[4947]: I0125 00:11:04.090398 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:04 crc kubenswrapper[4947]: E0125 00:11:04.090394 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:04 crc kubenswrapper[4947]: E0125 00:11:04.090541 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:06 crc kubenswrapper[4947]: I0125 00:11:06.089307 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:06 crc kubenswrapper[4947]: I0125 00:11:06.089419 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:06 crc kubenswrapper[4947]: I0125 00:11:06.089460 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:06 crc kubenswrapper[4947]: I0125 00:11:06.089622 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:06 crc kubenswrapper[4947]: E0125 00:11:06.090191 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:06 crc kubenswrapper[4947]: E0125 00:11:06.090469 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:06 crc kubenswrapper[4947]: E0125 00:11:06.090619 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:06 crc kubenswrapper[4947]: E0125 00:11:06.090730 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:08 crc kubenswrapper[4947]: I0125 00:11:08.088887 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:08 crc kubenswrapper[4947]: I0125 00:11:08.088981 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:08 crc kubenswrapper[4947]: I0125 00:11:08.088912 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:08 crc kubenswrapper[4947]: E0125 00:11:08.089185 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:08 crc kubenswrapper[4947]: E0125 00:11:08.089329 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:08 crc kubenswrapper[4947]: E0125 00:11:08.089596 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:08 crc kubenswrapper[4947]: I0125 00:11:08.090945 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:08 crc kubenswrapper[4947]: E0125 00:11:08.091422 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:10 crc kubenswrapper[4947]: I0125 00:11:10.089648 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:10 crc kubenswrapper[4947]: I0125 00:11:10.089648 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:10 crc kubenswrapper[4947]: E0125 00:11:10.089859 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:10 crc kubenswrapper[4947]: I0125 00:11:10.089883 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:10 crc kubenswrapper[4947]: I0125 00:11:10.089798 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:10 crc kubenswrapper[4947]: E0125 00:11:10.090047 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:10 crc kubenswrapper[4947]: E0125 00:11:10.090238 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:10 crc kubenswrapper[4947]: E0125 00:11:10.090358 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:12 crc kubenswrapper[4947]: I0125 00:11:12.088661 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:12 crc kubenswrapper[4947]: I0125 00:11:12.088676 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:12 crc kubenswrapper[4947]: I0125 00:11:12.088782 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:12 crc kubenswrapper[4947]: I0125 00:11:12.089487 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:12 crc kubenswrapper[4947]: E0125 00:11:12.089664 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:12 crc kubenswrapper[4947]: E0125 00:11:12.089944 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:12 crc kubenswrapper[4947]: E0125 00:11:12.090108 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:12 crc kubenswrapper[4947]: E0125 00:11:12.090236 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:13 crc kubenswrapper[4947]: I0125 00:11:13.090647 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" Jan 25 00:11:13 crc kubenswrapper[4947]: E0125 00:11:13.090897 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" Jan 25 00:11:14 crc kubenswrapper[4947]: I0125 00:11:14.088871 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:14 crc kubenswrapper[4947]: I0125 00:11:14.088902 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:14 crc kubenswrapper[4947]: I0125 00:11:14.088902 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:14 crc kubenswrapper[4947]: E0125 00:11:14.089702 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:14 crc kubenswrapper[4947]: E0125 00:11:14.089842 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:14 crc kubenswrapper[4947]: I0125 00:11:14.088957 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:14 crc kubenswrapper[4947]: E0125 00:11:14.089956 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:14 crc kubenswrapper[4947]: E0125 00:11:14.090267 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:16 crc kubenswrapper[4947]: I0125 00:11:16.088786 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:16 crc kubenswrapper[4947]: I0125 00:11:16.088801 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:16 crc kubenswrapper[4947]: E0125 00:11:16.089956 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:16 crc kubenswrapper[4947]: I0125 00:11:16.088838 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:16 crc kubenswrapper[4947]: E0125 00:11:16.090587 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:16 crc kubenswrapper[4947]: I0125 00:11:16.088811 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:16 crc kubenswrapper[4947]: E0125 00:11:16.090944 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:16 crc kubenswrapper[4947]: E0125 00:11:16.090257 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:17 crc kubenswrapper[4947]: I0125 00:11:17.868562 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/1.log" Jan 25 00:11:17 crc kubenswrapper[4947]: I0125 00:11:17.869244 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/0.log" Jan 25 00:11:17 crc kubenswrapper[4947]: I0125 00:11:17.869331 4947 generic.go:334] "Generic (PLEG): container finished" podID="2d914454-2c17-47f2-aa53-aba3bfaad296" containerID="6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470" exitCode=1 Jan 25 00:11:17 crc kubenswrapper[4947]: I0125 00:11:17.869383 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9fspn" event={"ID":"2d914454-2c17-47f2-aa53-aba3bfaad296","Type":"ContainerDied","Data":"6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470"} Jan 25 00:11:17 crc kubenswrapper[4947]: I0125 00:11:17.869451 4947 scope.go:117] "RemoveContainer" containerID="e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656" Jan 25 00:11:17 crc kubenswrapper[4947]: I0125 00:11:17.870245 4947 scope.go:117] "RemoveContainer" containerID="6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470" Jan 25 00:11:17 crc kubenswrapper[4947]: E0125 00:11:17.870645 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-9fspn_openshift-multus(2d914454-2c17-47f2-aa53-aba3bfaad296)\"" pod="openshift-multus/multus-9fspn" podUID="2d914454-2c17-47f2-aa53-aba3bfaad296" Jan 25 00:11:18 crc kubenswrapper[4947]: I0125 00:11:18.088877 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:18 crc kubenswrapper[4947]: I0125 00:11:18.088954 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:18 crc kubenswrapper[4947]: I0125 00:11:18.088877 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:18 crc kubenswrapper[4947]: E0125 00:11:18.089073 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:18 crc kubenswrapper[4947]: I0125 00:11:18.089120 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:18 crc kubenswrapper[4947]: E0125 00:11:18.089389 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:18 crc kubenswrapper[4947]: E0125 00:11:18.089551 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:18 crc kubenswrapper[4947]: E0125 00:11:18.089682 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:18 crc kubenswrapper[4947]: I0125 00:11:18.876323 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/1.log" Jan 25 00:11:20 crc kubenswrapper[4947]: I0125 00:11:20.089455 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:20 crc kubenswrapper[4947]: I0125 00:11:20.089533 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:20 crc kubenswrapper[4947]: I0125 00:11:20.089533 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:20 crc kubenswrapper[4947]: I0125 00:11:20.089641 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:20 crc kubenswrapper[4947]: E0125 00:11:20.089786 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:20 crc kubenswrapper[4947]: E0125 00:11:20.089630 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:20 crc kubenswrapper[4947]: E0125 00:11:20.089935 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:20 crc kubenswrapper[4947]: E0125 00:11:20.090011 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:21 crc kubenswrapper[4947]: E0125 00:11:21.123980 4947 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 25 00:11:21 crc kubenswrapper[4947]: E0125 00:11:21.184931 4947 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 25 00:11:22 crc kubenswrapper[4947]: I0125 00:11:22.090556 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:22 crc kubenswrapper[4947]: E0125 00:11:22.091360 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:22 crc kubenswrapper[4947]: I0125 00:11:22.090702 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:22 crc kubenswrapper[4947]: E0125 00:11:22.091505 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:22 crc kubenswrapper[4947]: I0125 00:11:22.090712 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:22 crc kubenswrapper[4947]: E0125 00:11:22.091714 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:22 crc kubenswrapper[4947]: I0125 00:11:22.090638 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:22 crc kubenswrapper[4947]: E0125 00:11:22.091806 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.089954 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.090011 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.090049 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:24 crc kubenswrapper[4947]: E0125 00:11:24.090291 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.090317 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:24 crc kubenswrapper[4947]: E0125 00:11:24.090563 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:24 crc kubenswrapper[4947]: E0125 00:11:24.091056 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:24 crc kubenswrapper[4947]: E0125 00:11:24.091203 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.091532 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.918085 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/3.log" Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.921081 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.921855 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.956450 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podStartSLOduration=103.956429305 podStartE2EDuration="1m43.956429305s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:24.955193055 +0000 UTC m=+124.188183545" watchObservedRunningTime="2026-01-25 00:11:24.956429305 +0000 UTC m=+124.189419775" Jan 25 00:11:25 crc kubenswrapper[4947]: I0125 00:11:25.132179 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hj7kb"] Jan 25 00:11:25 crc kubenswrapper[4947]: I0125 00:11:25.132311 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:25 crc kubenswrapper[4947]: E0125 00:11:25.132435 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:26 crc kubenswrapper[4947]: I0125 00:11:26.089279 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:26 crc kubenswrapper[4947]: I0125 00:11:26.089367 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:26 crc kubenswrapper[4947]: I0125 00:11:26.089430 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:26 crc kubenswrapper[4947]: E0125 00:11:26.089758 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:26 crc kubenswrapper[4947]: E0125 00:11:26.089954 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:26 crc kubenswrapper[4947]: E0125 00:11:26.090229 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:26 crc kubenswrapper[4947]: E0125 00:11:26.186604 4947 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 25 00:11:27 crc kubenswrapper[4947]: I0125 00:11:27.089343 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:27 crc kubenswrapper[4947]: E0125 00:11:27.089558 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:28 crc kubenswrapper[4947]: I0125 00:11:28.089425 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:28 crc kubenswrapper[4947]: E0125 00:11:28.089610 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:28 crc kubenswrapper[4947]: I0125 00:11:28.089874 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:28 crc kubenswrapper[4947]: E0125 00:11:28.089964 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:28 crc kubenswrapper[4947]: I0125 00:11:28.090164 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:28 crc kubenswrapper[4947]: E0125 00:11:28.090326 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:29 crc kubenswrapper[4947]: I0125 00:11:29.089560 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:29 crc kubenswrapper[4947]: E0125 00:11:29.089880 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:30 crc kubenswrapper[4947]: I0125 00:11:30.089047 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:30 crc kubenswrapper[4947]: I0125 00:11:30.089106 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:30 crc kubenswrapper[4947]: I0125 00:11:30.089210 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:30 crc kubenswrapper[4947]: E0125 00:11:30.089274 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:30 crc kubenswrapper[4947]: E0125 00:11:30.089415 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:30 crc kubenswrapper[4947]: E0125 00:11:30.089529 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:31 crc kubenswrapper[4947]: I0125 00:11:31.089484 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:31 crc kubenswrapper[4947]: E0125 00:11:31.091286 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:31 crc kubenswrapper[4947]: E0125 00:11:31.187379 4947 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 25 00:11:32 crc kubenswrapper[4947]: I0125 00:11:32.089352 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:32 crc kubenswrapper[4947]: I0125 00:11:32.089525 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:32 crc kubenswrapper[4947]: E0125 00:11:32.089638 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:32 crc kubenswrapper[4947]: I0125 00:11:32.089663 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:32 crc kubenswrapper[4947]: E0125 00:11:32.089851 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:32 crc kubenswrapper[4947]: E0125 00:11:32.090068 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:32 crc kubenswrapper[4947]: I0125 00:11:32.090454 4947 scope.go:117] "RemoveContainer" containerID="6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470" Jan 25 00:11:32 crc kubenswrapper[4947]: I0125 00:11:32.959631 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/1.log" Jan 25 00:11:32 crc kubenswrapper[4947]: I0125 00:11:32.960232 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9fspn" event={"ID":"2d914454-2c17-47f2-aa53-aba3bfaad296","Type":"ContainerStarted","Data":"c7e4bdd1edd4881050f59041c1d6812e03c6fe683d1104b228fa6e3da5f11032"} Jan 25 00:11:33 crc kubenswrapper[4947]: I0125 00:11:33.088818 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:33 crc kubenswrapper[4947]: E0125 00:11:33.088985 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:34 crc kubenswrapper[4947]: I0125 00:11:34.089724 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:34 crc kubenswrapper[4947]: I0125 00:11:34.089844 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:34 crc kubenswrapper[4947]: E0125 00:11:34.089941 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:34 crc kubenswrapper[4947]: E0125 00:11:34.090046 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:34 crc kubenswrapper[4947]: I0125 00:11:34.090213 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:34 crc kubenswrapper[4947]: E0125 00:11:34.090320 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:35 crc kubenswrapper[4947]: I0125 00:11:35.089012 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:35 crc kubenswrapper[4947]: E0125 00:11:35.089209 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.088976 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.088976 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.089090 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:36 crc kubenswrapper[4947]: E0125 00:11:36.089264 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:36 crc kubenswrapper[4947]: E0125 00:11:36.089415 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:36 crc kubenswrapper[4947]: E0125 00:11:36.089555 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.677110 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.739996 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nt9qq"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.740630 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.745294 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.745648 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.745698 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.746022 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.746077 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.746107 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.751624 4947 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-dmsjj"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.752183 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.754658 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.757685 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.758665 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.758679 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.758780 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.758782 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.758874 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.758880 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.760840 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 
00:11:36.760885 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.766045 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.766291 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.781057 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.788685 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.790738 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.791267 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.791359 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nql4"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.791796 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.793849 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.794053 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7kcc9"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.794640 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.794671 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29488320-jf979"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.795052 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.797781 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.798300 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.802422 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.803092 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.803688 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.803950 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.804643 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.805096 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.805573 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2plqs"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.806034 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.807358 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.807547 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.807714 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.807851 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.807924 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808022 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808100 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808206 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808283 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808334 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808435 4947 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"serving-cert" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808469 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808595 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808623 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808673 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808747 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808767 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808842 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808963 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.809092 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.809355 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.809530 4947 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.809775 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.812312 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xwjmr"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.812952 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.813476 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-5zvdg"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.814009 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5zvdg" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.814688 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.815472 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.816931 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.823090 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.823407 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.823624 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.823859 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.823980 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.824114 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.824325 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.824515 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.824735 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 25 00:11:36 crc 
kubenswrapper[4947]: I0125 00:11:36.824877 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.825357 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.825499 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.825571 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.825424 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.825964 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.826561 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.827205 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.827289 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5nscb"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.827309 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.827813 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.828108 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848663 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbbs4\" (UniqueName: \"kubernetes.io/projected/d3a733c1-a1cf-42ef-a056-27185292354f-kube-api-access-fbbs4\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848708 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-service-ca-bundle\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848733 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-audit-policies\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848767 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848789 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848821 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848839 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-config\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848860 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: 
\"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848878 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848897 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848927 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848964 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc 
kubenswrapper[4947]: I0125 00:11:36.848988 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm9wz\" (UniqueName: \"kubernetes.io/projected/bf8b174c-d1eb-4a2d-88c2-113302fa2300-kube-api-access-tm9wz\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.849009 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d3a733c1-a1cf-42ef-a056-27185292354f-audit-dir\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.849030 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.849049 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf8b174c-d1eb-4a2d-88c2-113302fa2300-serving-cert\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.849067 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.849089 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.849108 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.828771 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-h6jgn"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.850316 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.850894 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.872932 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zjf9d"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.873534 4947 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.873869 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.874077 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.874377 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.875017 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.876485 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.876584 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.877524 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-95tmb"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.877881 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.878110 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.891182 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.892117 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.892259 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.893090 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.895709 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vx9fn"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.896718 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.898454 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.903620 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.904434 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.904970 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.905450 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.905484 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.905931 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.906564 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.906790 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.906823 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.907549 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.911014 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.911608 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.911956 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.912238 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.922715 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mprs4"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.925268 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.925573 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.926104 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.926707 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.926853 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.938202 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.938796 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nt9qq"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.938885 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.943593 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.943763 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lz644"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.944531 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.944869 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.946026 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.946168 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951619 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-service-ca\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951664 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c613148-89dd-4904-b721-c90f6a0f89ba-trusted-ca\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951695 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-config\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951725 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951754 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c456f9-6cbf-4e3c-992a-8636357253ad-console-serving-cert\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951776 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cba929a-19da-479b-b9fb-b4cffaaba4c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951794 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951809 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4e8662e0-1de8-4371-8836-214a0394675c-node-pullsecrets\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951829 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf8b174c-d1eb-4a2d-88c2-113302fa2300-serving-cert\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 
00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951854 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60ffdf66-0472-4e1f-9ea6-869acc338d0e-config\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951875 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951890 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-serving-cert\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951905 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf4jd\" (UniqueName: \"kubernetes.io/projected/2244349f-df5c-4813-a0e7-418a602f57b0-kube-api-access-mf4jd\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951924 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6msz\" (UniqueName: 
\"kubernetes.io/projected/9e4f53a6-fcc3-4310-965d-9a5dda91080b-kube-api-access-k6msz\") pod \"image-pruner-29488320-jf979\" (UID: \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\") " pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951940 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-etcd-client\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951960 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951975 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-oauth-serving-cert\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951992 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952008 
4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-default-certificate\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952027 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952044 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-etcd-service-ca\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952059 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-stats-auth\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952077 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-client-ca\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: 
\"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952094 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbbs4\" (UniqueName: \"kubernetes.io/projected/d3a733c1-a1cf-42ef-a056-27185292354f-kube-api-access-fbbs4\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952109 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-service-ca-bundle\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952144 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c613148-89dd-4904-b721-c90f6a0f89ba-serving-cert\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952160 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b50bda2b-e707-456e-af02-796b6d9a4cdf-metrics-tls\") pod \"dns-operator-744455d44c-zjf9d\" (UID: \"b50bda2b-e707-456e-af02-796b6d9a4cdf\") " pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952176 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b7a00e-4def-4d1e-8333-94d15174223b-serving-cert\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952191 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqkd6\" (UniqueName: \"kubernetes.io/projected/4c613148-89dd-4904-b721-c90f6a0f89ba-kube-api-access-qqkd6\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952210 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-audit-policies\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952226 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhc9z\" (UniqueName: \"kubernetes.io/projected/b8f2f610-05dc-49ea-882e-634d283b3caa-kube-api-access-dhc9z\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952244 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsx9g\" (UniqueName: \"kubernetes.io/projected/9d15a018-8297-4e45-8a62-afa89a267381-kube-api-access-zsx9g\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952258 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/222d5540-6b86-404a-b787-ea6a6043206a-serving-cert\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952273 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4qnc\" (UniqueName: \"kubernetes.io/projected/222d5540-6b86-404a-b787-ea6a6043206a-kube-api-access-s4qnc\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952296 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-console-config\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952311 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mtfs\" (UniqueName: \"kubernetes.io/projected/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-kube-api-access-9mtfs\") pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952328 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952345 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952380 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952396 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952412 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-config\") pod \"controller-manager-879f6c89f-5nql4\" (UID: 
\"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952426 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-config\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952442 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-etcd-serving-ca\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952464 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/90a4381e-451b-4940-932a-efba1d101c81-machine-approver-tls\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952482 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-config\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952496 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952510 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952526 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952541 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-trusted-ca-bundle\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952556 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8f2f610-05dc-49ea-882e-634d283b3caa-config\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952569 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e4f53a6-fcc3-4310-965d-9a5dda91080b-serviceca\") pod \"image-pruner-29488320-jf979\" (UID: \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\") " pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952585 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952601 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6xwl\" (UniqueName: \"kubernetes.io/projected/0e97ae5e-35ab-41e9-aa03-ad060bbbd676-kube-api-access-z6xwl\") pod \"downloads-7954f5f757-5zvdg\" (UID: \"0e97ae5e-35ab-41e9-aa03-ad060bbbd676\") " pod="openshift-console/downloads-7954f5f757-5zvdg" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952615 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-etcd-ca\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952632 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmpf6\" (UniqueName: 
\"kubernetes.io/projected/8cba929a-19da-479b-b9fb-b4cffaaba4c2-kube-api-access-vmpf6\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952647 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-encryption-config\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952661 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xtn8\" (UniqueName: \"kubernetes.io/projected/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-kube-api-access-2xtn8\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952675 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-metrics-certs\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952691 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952705 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60ffdf66-0472-4e1f-9ea6-869acc338d0e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952720 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-image-import-ca\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952735 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvf82\" (UniqueName: \"kubernetes.io/projected/4e8662e0-1de8-4371-8836-214a0394675c-kube-api-access-lvf82\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952749 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmh4c\" (UniqueName: \"kubernetes.io/projected/90a4381e-451b-4940-932a-efba1d101c81-kube-api-access-bmh4c\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952763 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c613148-89dd-4904-b721-c90f6a0f89ba-config\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952778 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16240ac3-819b-4e68-bca9-c97c94599fbb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952793 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/79a96518-940a-4490-9067-9e2f873753f7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-25dw7\" (UID: \"79a96518-940a-4490-9067-9e2f873753f7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952807 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4p7b\" (UniqueName: \"kubernetes.io/projected/79a96518-940a-4490-9067-9e2f873753f7-kube-api-access-l4p7b\") pod \"cluster-samples-operator-665b6dd947-25dw7\" (UID: \"79a96518-940a-4490-9067-9e2f873753f7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952822 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: 
\"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952844 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2244349f-df5c-4813-a0e7-418a602f57b0-service-ca-bundle\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952861 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952875 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8cba929a-19da-479b-b9fb-b4cffaaba4c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952891 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9twfs\" (UniqueName: \"kubernetes.io/projected/b50bda2b-e707-456e-af02-796b6d9a4cdf-kube-api-access-9twfs\") pod \"dns-operator-744455d44c-zjf9d\" (UID: \"b50bda2b-e707-456e-af02-796b6d9a4cdf\") " pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952908 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njhj5\" (UniqueName: \"kubernetes.io/projected/16240ac3-819b-4e68-bca9-c97c94599fbb-kube-api-access-njhj5\") pod \"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952923 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b8f2f610-05dc-49ea-882e-634d283b3caa-images\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952940 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/49c456f9-6cbf-4e3c-992a-8636357253ad-console-oauth-config\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952955 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d15a018-8297-4e45-8a62-afa89a267381-serving-cert\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952970 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5nql4\" 
(UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952984 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/222d5540-6b86-404a-b787-ea6a6043206a-etcd-client\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953013 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-client-ca\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953027 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8f2f610-05dc-49ea-882e-634d283b3caa-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953042 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60ffdf66-0472-4e1f-9ea6-869acc338d0e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953058 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x956c\" (UniqueName: \"kubernetes.io/projected/37b7a00e-4def-4d1e-8333-94d15174223b-kube-api-access-x956c\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953072 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-config\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953088 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953104 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm9wz\" (UniqueName: \"kubernetes.io/projected/bf8b174c-d1eb-4a2d-88c2-113302fa2300-kube-api-access-tm9wz\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953118 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16240ac3-819b-4e68-bca9-c97c94599fbb-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953147 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90a4381e-451b-4940-932a-efba1d101c81-auth-proxy-config\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953164 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4jhb\" (UniqueName: \"kubernetes.io/projected/49c456f9-6cbf-4e3c-992a-8636357253ad-kube-api-access-l4jhb\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953179 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90a4381e-451b-4940-932a-efba1d101c81-config\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953194 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d3a733c1-a1cf-42ef-a056-27185292354f-audit-dir\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953209 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-audit\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953224 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4e8662e0-1de8-4371-8836-214a0394675c-audit-dir\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.956785 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.957858 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.963749 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.967337 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: 
I0125 00:11:36.968166 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.980866 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.981705 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.984373 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.984409 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf8b174c-d1eb-4a2d-88c2-113302fa2300-serving-cert\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.995543 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d3a733c1-a1cf-42ef-a056-27185292354f-audit-dir\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.998243 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmsjj"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.998283 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-k7fhc"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.998949 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.000572 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-config\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.001297 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.001577 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.001703 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.002394 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-service-ca-bundle\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.005968 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.004419 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-audit-policies\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.008735 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nql4"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.008760 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-qgqfk"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.009700 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.009932 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.010247 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.010506 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.013274 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.013549 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.013885 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.014197 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.014455 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.014630 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.014718 4947 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.014847 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.015399 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.015773 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.017244 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qgqfk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.017446 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.025716 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.026992 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.027227 4947 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.027486 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.027600 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.027711 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.027842 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.027900 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.028145 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.028402 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.028524 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.029083 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.029789 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.031599 
4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.031751 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.031854 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.033296 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.033469 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.033415 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.033693 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.033739 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.033896 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.033931 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.034003 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 
00:11:37.034055 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.034145 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.034175 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.036492 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.038850 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.038971 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.040586 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29488320-jf979"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.041921 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.042930 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7kcc9"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.045838 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2plqs"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.045867 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7"] Jan 25 00:11:37 crc 
kubenswrapper[4947]: I0125 00:11:37.046462 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.047670 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.048213 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.049505 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xwjmr"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.052735 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lz644"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.052826 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055175 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055749 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d15a018-8297-4e45-8a62-afa89a267381-serving-cert\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055788 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e8ad493-9466-46d8-8307-13f24463f184-serving-cert\") pod 
\"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055816 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9p5g\" (UniqueName: \"kubernetes.io/projected/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-kube-api-access-j9p5g\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055845 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8f2f610-05dc-49ea-882e-634d283b3caa-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055870 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16240ac3-819b-4e68-bca9-c97c94599fbb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055889 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90a4381e-451b-4940-932a-efba1d101c81-auth-proxy-config\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055907 
4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90a4381e-451b-4940-932a-efba1d101c81-config\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055924 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-cabundle\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055946 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7brwl\" (UniqueName: \"kubernetes.io/projected/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-kube-api-access-7brwl\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055967 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e8f132b-916b-4973-9873-5919cb12251c-config\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: \"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055988 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e8f132b-916b-4973-9873-5919cb12251c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: 
\"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056009 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c613148-89dd-4904-b721-c90f6a0f89ba-trusted-ca\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056027 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-config\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056046 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cba929a-19da-479b-b9fb-b4cffaaba4c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056065 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fffe8f2-59b1-4215-809e-461bc8f5e386-audit-dir\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056084 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-serving-cert\") 
pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056101 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4e8662e0-1de8-4371-8836-214a0394675c-node-pullsecrets\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056135 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056153 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056170 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx5zf\" (UniqueName: \"kubernetes.io/projected/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-kube-api-access-jx5zf\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc 
kubenswrapper[4947]: I0125 00:11:37.056188 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6msz\" (UniqueName: \"kubernetes.io/projected/9e4f53a6-fcc3-4310-965d-9a5dda91080b-kube-api-access-k6msz\") pod \"image-pruner-29488320-jf979\" (UID: \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\") " pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056208 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-serving-cert\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056230 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056248 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-etcd-client\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056267 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e230409-6e68-4f7c-b0c3-3e55433b22c1-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-tbnnc\" (UID: \"7e230409-6e68-4f7c-b0c3-3e55433b22c1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056296 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-etcd-service-ca\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056317 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-stats-auth\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056341 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b7a00e-4def-4d1e-8333-94d15174223b-serving-cert\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056382 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsx9g\" (UniqueName: \"kubernetes.io/projected/9d15a018-8297-4e45-8a62-afa89a267381-kube-api-access-zsx9g\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056444 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/222d5540-6b86-404a-b787-ea6a6043206a-serving-cert\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056461 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4qnc\" (UniqueName: \"kubernetes.io/projected/222d5540-6b86-404a-b787-ea6a6043206a-kube-api-access-s4qnc\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056479 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mtfs\" (UniqueName: \"kubernetes.io/projected/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-kube-api-access-9mtfs\") pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056502 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-audit-policies\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056522 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" 
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056542 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgmn4\" (UniqueName: \"kubernetes.io/projected/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-kube-api-access-dgmn4\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056560 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-config\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056578 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-config\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056598 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzfhp\" (UniqueName: \"kubernetes.io/projected/0e8ad493-9466-46d8-8307-13f24463f184-kube-api-access-pzfhp\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056615 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-images\") pod 
\"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056633 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-proxy-tls\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056666 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-tmpfs\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056692 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-apiservice-cert\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056721 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056750 4947 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-z6xwl\" (UniqueName: \"kubernetes.io/projected/0e97ae5e-35ab-41e9-aa03-ad060bbbd676-kube-api-access-z6xwl\") pod \"downloads-7954f5f757-5zvdg\" (UID: \"0e97ae5e-35ab-41e9-aa03-ad060bbbd676\") " pod="openshift-console/downloads-7954f5f757-5zvdg" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056777 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-trusted-ca-bundle\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056800 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-encryption-config\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056827 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvf82\" (UniqueName: \"kubernetes.io/projected/4e8662e0-1de8-4371-8836-214a0394675c-kube-api-access-lvf82\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056851 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16240ac3-819b-4e68-bca9-c97c94599fbb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056876 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmh4c\" (UniqueName: \"kubernetes.io/projected/90a4381e-451b-4940-932a-efba1d101c81-kube-api-access-bmh4c\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056904 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgr5k\" (UniqueName: \"kubernetes.io/projected/7e230409-6e68-4f7c-b0c3-3e55433b22c1-kube-api-access-fgr5k\") pod \"package-server-manager-789f6589d5-tbnnc\" (UID: \"7e230409-6e68-4f7c-b0c3-3e55433b22c1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056932 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8cba929a-19da-479b-b9fb-b4cffaaba4c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056951 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9twfs\" (UniqueName: \"kubernetes.io/projected/b50bda2b-e707-456e-af02-796b6d9a4cdf-kube-api-access-9twfs\") pod \"dns-operator-744455d44c-zjf9d\" (UID: \"b50bda2b-e707-456e-af02-796b6d9a4cdf\") " pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056990 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2244349f-df5c-4813-a0e7-418a602f57b0-service-ca-bundle\") pod 
\"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057019 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-metrics-tls\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057042 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057061 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e8f132b-916b-4973-9873-5919cb12251c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: \"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057081 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/49c456f9-6cbf-4e3c-992a-8636357253ad-console-oauth-config\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057099 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-encryption-config\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057119 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057201 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/222d5540-6b86-404a-b787-ea6a6043206a-etcd-client\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057223 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-webhook-cert\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057240 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 
crc kubenswrapper[4947]: I0125 00:11:37.057272 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-client-ca\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057296 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60ffdf66-0472-4e1f-9ea6-869acc338d0e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057319 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x956c\" (UniqueName: \"kubernetes.io/projected/37b7a00e-4def-4d1e-8333-94d15174223b-kube-api-access-x956c\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057337 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-config\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057358 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4jhb\" (UniqueName: \"kubernetes.io/projected/49c456f9-6cbf-4e3c-992a-8636357253ad-kube-api-access-l4jhb\") pod \"console-f9d7485db-95tmb\" 
(UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057376 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-srv-cert\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057387 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c613148-89dd-4904-b721-c90f6a0f89ba-trusted-ca\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057396 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-audit\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057495 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4e8662e0-1de8-4371-8836-214a0394675c-audit-dir\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057523 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-service-ca\") pod \"console-f9d7485db-95tmb\" (UID: 
\"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057552 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql48r\" (UniqueName: \"kubernetes.io/projected/b6a491f6-3829-4c9d-88cb-a49864576106-kube-api-access-ql48r\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057584 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c456f9-6cbf-4e3c-992a-8636357253ad-console-serving-cert\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057605 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qzff\" (UniqueName: \"kubernetes.io/projected/f56c1338-08c8-47de-b24a-3aaf85e315f8-kube-api-access-5qzff\") pod \"control-plane-machine-set-operator-78cbb6b69f-vwtw5\" (UID: \"f56c1338-08c8-47de-b24a-3aaf85e315f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057629 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60ffdf66-0472-4e1f-9ea6-869acc338d0e-config\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057662 4947 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-mf4jd\" (UniqueName: \"kubernetes.io/projected/2244349f-df5c-4813-a0e7-418a602f57b0-kube-api-access-mf4jd\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057682 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f56c1338-08c8-47de-b24a-3aaf85e315f8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vwtw5\" (UID: \"f56c1338-08c8-47de-b24a-3aaf85e315f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057703 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-oauth-serving-cert\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057721 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-key\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057744 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-default-certificate\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " 
pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057774 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-client-ca\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057791 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-trusted-ca\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057812 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c613148-89dd-4904-b721-c90f6a0f89ba-serving-cert\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057830 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b50bda2b-e707-456e-af02-796b6d9a4cdf-metrics-tls\") pod \"dns-operator-744455d44c-zjf9d\" (UID: \"b50bda2b-e707-456e-af02-796b6d9a4cdf\") " pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057849 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqkd6\" (UniqueName: \"kubernetes.io/projected/4c613148-89dd-4904-b721-c90f6a0f89ba-kube-api-access-qqkd6\") pod 
\"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057867 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-serving-cert\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057881 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-config\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057890 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhc9z\" (UniqueName: \"kubernetes.io/projected/b8f2f610-05dc-49ea-882e-634d283b3caa-kube-api-access-dhc9z\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057926 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-console-config\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057946 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057964 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057983 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-etcd-client\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058004 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058025 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-etcd-serving-ca\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc 
kubenswrapper[4947]: I0125 00:11:37.058042 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/90a4381e-451b-4940-932a-efba1d101c81-machine-approver-tls\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058065 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058084 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpn25\" (UniqueName: \"kubernetes.io/projected/0fffe8f2-59b1-4215-809e-461bc8f5e386-kube-api-access-xpn25\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058111 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8f2f610-05dc-49ea-882e-634d283b3caa-config\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058143 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e4f53a6-fcc3-4310-965d-9a5dda91080b-serviceca\") pod \"image-pruner-29488320-jf979\" (UID: 
\"9e4f53a6-fcc3-4310-965d-9a5dda91080b\") " pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058165 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-etcd-ca\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058187 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmpf6\" (UniqueName: \"kubernetes.io/projected/8cba929a-19da-479b-b9fb-b4cffaaba4c2-kube-api-access-vmpf6\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058210 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xtn8\" (UniqueName: \"kubernetes.io/projected/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-kube-api-access-2xtn8\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058228 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60ffdf66-0472-4e1f-9ea6-869acc338d0e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058245 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-image-import-ca\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058264 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-metrics-certs\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058281 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c613148-89dd-4904-b721-c90f6a0f89ba-config\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058314 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/79a96518-940a-4490-9067-9e2f873753f7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-25dw7\" (UID: \"79a96518-940a-4490-9067-9e2f873753f7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058336 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4p7b\" (UniqueName: \"kubernetes.io/projected/79a96518-940a-4490-9067-9e2f873753f7-kube-api-access-l4p7b\") pod \"cluster-samples-operator-665b6dd947-25dw7\" (UID: \"79a96518-940a-4490-9067-9e2f873753f7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" Jan 25 00:11:37 crc 
kubenswrapper[4947]: I0125 00:11:37.058361 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058406 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njhj5\" (UniqueName: \"kubernetes.io/projected/16240ac3-819b-4e68-bca9-c97c94599fbb-kube-api-access-njhj5\") pod \"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058421 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90a4381e-451b-4940-932a-efba1d101c81-auth-proxy-config\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058429 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e8ad493-9466-46d8-8307-13f24463f184-config\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058499 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4e8662e0-1de8-4371-8836-214a0394675c-audit-dir\") pod 
\"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058510 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b8f2f610-05dc-49ea-882e-634d283b3caa-images\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058548 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.059540 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b8f2f610-05dc-49ea-882e-634d283b3caa-images\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.059714 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-audit\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.060103 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90a4381e-451b-4940-932a-efba1d101c81-config\") pod 
\"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.060895 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2244349f-df5c-4813-a0e7-418a602f57b0-service-ca-bundle\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.061437 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.061471 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-h6jgn"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.061482 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5zvdg"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.061741 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-config\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.063272 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.067209 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/222d5540-6b86-404a-b787-ea6a6043206a-etcd-client\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.067467 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-95tmb"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.069286 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-oauth-serving-cert\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.071487 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-console-config\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.071583 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-serving-cert\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.071947 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-etcd-serving-ca\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " 
pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.072265 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c613148-89dd-4904-b721-c90f6a0f89ba-config\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.074404 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d15a018-8297-4e45-8a62-afa89a267381-serving-cert\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.075531 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-client-ca\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.075620 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4e8662e0-1de8-4371-8836-214a0394675c-node-pullsecrets\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.075756 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.075807 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-zjf9d"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.075992 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-etcd-service-ca\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.076822 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-default-certificate\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.077299 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e4f53a6-fcc3-4310-965d-9a5dda91080b-serviceca\") pod \"image-pruner-29488320-jf979\" (UID: \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\") " pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.077310 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-etcd-ca\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.077426 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-client-ca\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.077926 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16240ac3-819b-4e68-bca9-c97c94599fbb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.077931 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-image-import-ca\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.078061 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c613148-89dd-4904-b721-c90f6a0f89ba-serving-cert\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.078251 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.078948 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 25 00:11:37 crc 
kubenswrapper[4947]: I0125 00:11:37.079288 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.079330 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.079356 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mprs4"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.079371 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.079484 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.079865 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8f2f610-05dc-49ea-882e-634d283b3caa-config\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.080273 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-encryption-config\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.080288 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-etcd-client\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.080620 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.080659 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.081012 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16240ac3-819b-4e68-bca9-c97c94599fbb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.081142 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-config\") pod \"controller-manager-879f6c89f-5nql4\" (UID: 
\"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.081331 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-config\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.081552 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b7a00e-4def-4d1e-8333-94d15174223b-serving-cert\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.082418 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-metrics-certs\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.082484 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.083281 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/79a96518-940a-4490-9067-9e2f873753f7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-25dw7\" (UID: \"79a96518-940a-4490-9067-9e2f873753f7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" Jan 25 00:11:37 
crc kubenswrapper[4947]: I0125 00:11:37.083505 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-stats-auth\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.083611 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vx9fn"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.083701 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8cba929a-19da-479b-b9fb-b4cffaaba4c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.083790 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b50bda2b-e707-456e-af02-796b6d9a4cdf-metrics-tls\") pod \"dns-operator-744455d44c-zjf9d\" (UID: \"b50bda2b-e707-456e-af02-796b6d9a4cdf\") " pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.084190 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/222d5540-6b86-404a-b787-ea6a6043206a-serving-cert\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.084667 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b8f2f610-05dc-49ea-882e-634d283b3caa-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.084686 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.085683 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.086694 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.087822 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5s2mh"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.087982 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.088755 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.088773 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cba929a-19da-479b-b9fb-b4cffaaba4c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.089477 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.097314 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pjjgh"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.097460 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.100454 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-trusted-ca-bundle\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.101188 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.102973 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.105847 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.106983 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.108284 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.109302 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/90a4381e-451b-4940-932a-efba1d101c81-machine-approver-tls\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.114087 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.116269 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.118014 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.119343 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60ffdf66-0472-4e1f-9ea6-869acc338d0e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.119467 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qgqfk"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.120693 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5s2mh"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.122303 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.122873 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-k7fhc"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.124026 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wfcjp"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.125021 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.125274 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pjjgh"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.129556 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60ffdf66-0472-4e1f-9ea6-869acc338d0e-config\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.144685 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.156898 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/49c456f9-6cbf-4e3c-992a-8636357253ad-console-oauth-config\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.159723 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e230409-6e68-4f7c-b0c3-3e55433b22c1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tbnnc\" (UID: \"7e230409-6e68-4f7c-b0c3-3e55433b22c1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.159868 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-audit-policies\") 
pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.159958 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160050 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgmn4\" (UniqueName: \"kubernetes.io/projected/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-kube-api-access-dgmn4\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160153 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzfhp\" (UniqueName: \"kubernetes.io/projected/0e8ad493-9466-46d8-8307-13f24463f184-kube-api-access-pzfhp\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160249 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-images\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160324 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-proxy-tls\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160468 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-tmpfs\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160592 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-apiservice-cert\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160739 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgr5k\" (UniqueName: \"kubernetes.io/projected/7e230409-6e68-4f7c-b0c3-3e55433b22c1-kube-api-access-fgr5k\") pod \"package-server-manager-789f6589d5-tbnnc\" (UID: \"7e230409-6e68-4f7c-b0c3-3e55433b22c1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160867 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-metrics-tls\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160976 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161067 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e8f132b-916b-4973-9873-5919cb12251c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: \"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161171 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-encryption-config\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161248 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-webhook-cert\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161319 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161425 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-srv-cert\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161529 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql48r\" (UniqueName: \"kubernetes.io/projected/b6a491f6-3829-4c9d-88cb-a49864576106-kube-api-access-ql48r\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161622 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qzff\" (UniqueName: \"kubernetes.io/projected/f56c1338-08c8-47de-b24a-3aaf85e315f8-kube-api-access-5qzff\") pod \"control-plane-machine-set-operator-78cbb6b69f-vwtw5\" (UID: \"f56c1338-08c8-47de-b24a-3aaf85e315f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161724 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f56c1338-08c8-47de-b24a-3aaf85e315f8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vwtw5\" (UID: \"f56c1338-08c8-47de-b24a-3aaf85e315f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161806 4947 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-key\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161917 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-trusted-ca\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162002 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-serving-cert\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161274 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-tmpfs\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162193 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-etcd-client\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162280 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162378 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpn25\" (UniqueName: \"kubernetes.io/projected/0fffe8f2-59b1-4215-809e-461bc8f5e386-kube-api-access-xpn25\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162506 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162601 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e8ad493-9466-46d8-8307-13f24463f184-config\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162274 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 
25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162680 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162801 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e8ad493-9466-46d8-8307-13f24463f184-serving-cert\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162846 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9p5g\" (UniqueName: \"kubernetes.io/projected/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-kube-api-access-j9p5g\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162892 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162913 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-cabundle\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162941 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7brwl\" (UniqueName: \"kubernetes.io/projected/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-kube-api-access-7brwl\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162971 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e8f132b-916b-4973-9873-5919cb12251c-config\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: \"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162993 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e8f132b-916b-4973-9873-5919cb12251c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: \"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.163035 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fffe8f2-59b1-4215-809e-461bc8f5e386-audit-dir\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.163064 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:37 crc 
kubenswrapper[4947]: I0125 00:11:37.163092 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx5zf\" (UniqueName: \"kubernetes.io/projected/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-kube-api-access-jx5zf\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.163121 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.163313 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fffe8f2-59b1-4215-809e-461bc8f5e386-audit-dir\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.168882 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c456f9-6cbf-4e3c-992a-8636357253ad-console-serving-cert\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.182844 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.190229 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-service-ca\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.202982 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.223570 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.231891 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-images\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.242907 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.262238 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.274392 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-proxy-tls\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.282517 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 25 
00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.312328 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.324531 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.324827 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-trusted-ca\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.343665 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.362559 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.374457 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-metrics-tls\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.382979 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.403105 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.423984 4947 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.453492 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.463421 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.483336 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.503819 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.523175 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.535546 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e8f132b-916b-4973-9873-5919cb12251c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: \"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.543735 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.563862 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.582886 4947 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.585169 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e8f132b-916b-4973-9873-5919cb12251c-config\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: \"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.603738 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.624228 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.644564 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.663921 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.683591 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.703226 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.718335 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-etcd-client\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.723745 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.736184 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-serving-cert\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.743500 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.756633 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-encryption-config\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.764913 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.783662 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.803031 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.823323 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.842887 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.851413 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.863263 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.871984 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-audit-policies\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.883981 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.894306 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.903024 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 25 00:11:37 crc 
kubenswrapper[4947]: I0125 00:11:37.921192 4947 request.go:700] Waited for 1.012516009s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.922651 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.944037 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.964758 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.976223 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f56c1338-08c8-47de-b24a-3aaf85e315f8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vwtw5\" (UID: \"f56c1338-08c8-47de-b24a-3aaf85e315f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.983393 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.030937 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.031185 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 25 00:11:38 crc 
kubenswrapper[4947]: I0125 00:11:38.038969 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.042668 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.063072 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.083342 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.089452 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.089522 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.089627 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.103895 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.124081 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.134550 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e230409-6e68-4f7c-b0c3-3e55433b22c1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tbnnc\" (UID: \"7e230409-6e68-4f7c-b0c3-3e55433b22c1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.144248 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.161597 4947 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.161724 4947 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.161764 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-webhook-cert podName:17242dc8-e334-406d-ad0a-5dc9ecdf0d6a nodeName:}" failed. 
No retries permitted until 2026-01-25 00:11:38.66172451 +0000 UTC m=+137.894714970 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-webhook-cert") pod "packageserver-d55dfcdfc-bg9x9" (UID: "17242dc8-e334-406d-ad0a-5dc9ecdf0d6a") : failed to sync secret cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.161838 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-apiservice-cert podName:17242dc8-e334-406d-ad0a-5dc9ecdf0d6a nodeName:}" failed. No retries permitted until 2026-01-25 00:11:38.661802582 +0000 UTC m=+137.894793052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-apiservice-cert") pod "packageserver-d55dfcdfc-bg9x9" (UID: "17242dc8-e334-406d-ad0a-5dc9ecdf0d6a") : failed to sync secret cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.161979 4947 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.161985 4947 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.162080 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-srv-cert podName:b6a491f6-3829-4c9d-88cb-a49864576106 nodeName:}" failed. No retries permitted until 2026-01-25 00:11:38.662053948 +0000 UTC m=+137.895044428 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-srv-cert") pod "olm-operator-6b444d44fb-nkrs8" (UID: "b6a491f6-3829-4c9d-88cb-a49864576106") : failed to sync secret cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.162235 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-key podName:c04cc1eb-ec23-4876-afd1-f123c04cdc8a nodeName:}" failed. No retries permitted until 2026-01-25 00:11:38.662093329 +0000 UTC m=+137.895083809 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-key") pod "service-ca-9c57cc56f-lz644" (UID: "c04cc1eb-ec23-4876-afd1-f123c04cdc8a") : failed to sync secret cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.163182 4947 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.163206 4947 secret.go:188] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.163269 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0e8ad493-9466-46d8-8307-13f24463f184-config podName:0e8ad493-9466-46d8-8307-13f24463f184 nodeName:}" failed. No retries permitted until 2026-01-25 00:11:38.66324595 +0000 UTC m=+137.896236640 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0e8ad493-9466-46d8-8307-13f24463f184-config") pod "service-ca-operator-777779d784-n4bqp" (UID: "0e8ad493-9466-46d8-8307-13f24463f184") : failed to sync configmap cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.163314 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-serving-cert podName:4ec7126b-b0f9-4fff-a11f-76726ce4c4ff nodeName:}" failed. No retries permitted until 2026-01-25 00:11:38.663294611 +0000 UTC m=+137.896285091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-serving-cert") pod "openshift-kube-scheduler-operator-5fdd9b5758-2prvv" (UID: "4ec7126b-b0f9-4fff-a11f-76726ce4c4ff") : failed to sync secret cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.163328 4947 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.163455 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e8ad493-9466-46d8-8307-13f24463f184-serving-cert podName:0e8ad493-9466-46d8-8307-13f24463f184 nodeName:}" failed. No retries permitted until 2026-01-25 00:11:38.663401984 +0000 UTC m=+137.896392454 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0e8ad493-9466-46d8-8307-13f24463f184-serving-cert") pod "service-ca-operator-777779d784-n4bqp" (UID: "0e8ad493-9466-46d8-8307-13f24463f184") : failed to sync secret cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.164608 4947 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.164680 4947 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.164752 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-config podName:4ec7126b-b0f9-4fff-a11f-76726ce4c4ff nodeName:}" failed. No retries permitted until 2026-01-25 00:11:38.664723098 +0000 UTC m=+137.897713578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-config") pod "openshift-kube-scheduler-operator-5fdd9b5758-2prvv" (UID: "4ec7126b-b0f9-4fff-a11f-76726ce4c4ff") : failed to sync configmap cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.164789 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-cabundle podName:c04cc1eb-ec23-4876-afd1-f123c04cdc8a nodeName:}" failed. No retries permitted until 2026-01-25 00:11:38.664767229 +0000 UTC m=+137.897757699 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-cabundle") pod "service-ca-9c57cc56f-lz644" (UID: "c04cc1eb-ec23-4876-afd1-f123c04cdc8a") : failed to sync configmap cache: timed out waiting for the condition Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.164792 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.183729 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.202109 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.223664 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.243518 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.265787 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.282972 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.303754 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.323866 4947 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.343195 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.364366 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.383745 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.404707 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.424791 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.443757 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.463084 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.513956 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbbs4\" (UniqueName: \"kubernetes.io/projected/d3a733c1-a1cf-42ef-a056-27185292354f-kube-api-access-fbbs4\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.538186 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm9wz\" (UniqueName: 
\"kubernetes.io/projected/bf8b174c-d1eb-4a2d-88c2-113302fa2300-kube-api-access-tm9wz\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.544164 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.561495 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.566812 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.583641 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.604612 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.606444 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.624281 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.645043 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.668699 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.683255 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691073 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-webhook-cert\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691164 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-srv-cert\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691218 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-key\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691311 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691368 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e8ad493-9466-46d8-8307-13f24463f184-config\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691392 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e8ad493-9466-46d8-8307-13f24463f184-serving-cert\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691460 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-cabundle\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691505 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691616 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-apiservice-cert\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.693775 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-cabundle\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.694603 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.698463 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-webhook-cert\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.698979 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-apiservice-cert\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.701007 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-srv-cert\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.705006 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-key\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.708995 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.710992 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e8ad493-9466-46d8-8307-13f24463f184-serving-cert\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.745028 4947 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9twfs\" (UniqueName: \"kubernetes.io/projected/b50bda2b-e707-456e-af02-796b6d9a4cdf-kube-api-access-9twfs\") pod \"dns-operator-744455d44c-zjf9d\" (UID: \"b50bda2b-e707-456e-af02-796b6d9a4cdf\") " pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.767300 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x956c\" (UniqueName: \"kubernetes.io/projected/37b7a00e-4def-4d1e-8333-94d15174223b-kube-api-access-x956c\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.784232 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4jhb\" (UniqueName: \"kubernetes.io/projected/49c456f9-6cbf-4e3c-992a-8636357253ad-kube-api-access-l4jhb\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.801410 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhc9z\" (UniqueName: \"kubernetes.io/projected/b8f2f610-05dc-49ea-882e-634d283b3caa-kube-api-access-dhc9z\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.818275 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf4jd\" (UniqueName: \"kubernetes.io/projected/2244349f-df5c-4813-a0e7-418a602f57b0-kube-api-access-mf4jd\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:38 crc 
kubenswrapper[4947]: I0125 00:11:38.838481 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4p7b\" (UniqueName: \"kubernetes.io/projected/79a96518-940a-4490-9067-9e2f873753f7-kube-api-access-l4p7b\") pod \"cluster-samples-operator-665b6dd947-25dw7\" (UID: \"79a96518-940a-4490-9067-9e2f873753f7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.862283 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njhj5\" (UniqueName: \"kubernetes.io/projected/16240ac3-819b-4e68-bca9-c97c94599fbb-kube-api-access-njhj5\") pod \"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.868250 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.884574 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqkd6\" (UniqueName: \"kubernetes.io/projected/4c613148-89dd-4904-b721-c90f6a0f89ba-kube-api-access-qqkd6\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.887462 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.895879 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.901278 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6msz\" (UniqueName: \"kubernetes.io/projected/9e4f53a6-fcc3-4310-965d-9a5dda91080b-kube-api-access-k6msz\") pod \"image-pruner-29488320-jf979\" (UID: \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\") " pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.906783 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.925649 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.926473 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.929702 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmpf6\" (UniqueName: \"kubernetes.io/projected/8cba929a-19da-479b-b9fb-b4cffaaba4c2-kube-api-access-vmpf6\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.943312 4947 request.go:700] Waited for 1.866333379s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/serviceaccounts/kube-controller-manager-operator/token Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.946179 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xtn8\" (UniqueName: \"kubernetes.io/projected/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-kube-api-access-2xtn8\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.964998 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60ffdf66-0472-4e1f-9ea6-869acc338d0e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.993940 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.005963 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4qnc\" (UniqueName: \"kubernetes.io/projected/222d5540-6b86-404a-b787-ea6a6043206a-kube-api-access-s4qnc\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.019052 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsx9g\" (UniqueName: \"kubernetes.io/projected/9d15a018-8297-4e45-8a62-afa89a267381-kube-api-access-zsx9g\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.022740 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.043504 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mtfs\" (UniqueName: \"kubernetes.io/projected/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-kube-api-access-9mtfs\") pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.045886 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5nscb" event={"ID":"2244349f-df5c-4813-a0e7-418a602f57b0","Type":"ContainerStarted","Data":"c9c497f538fb0eb148a67a905e3be1557ae1075ee9155bbbe0a049da1f40ded4"} Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.055419 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.082951 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.097400 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvf82\" (UniqueName: \"kubernetes.io/projected/4e8662e0-1de8-4371-8836-214a0394675c-kube-api-access-lvf82\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.103714 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.123031 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.141519 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.142817 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.162941 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.190301 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.191710 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.203753 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.223559 4947 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.235449 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.243382 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.263534 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.273750 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.282900 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.283741 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.304582 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.318384 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.339218 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.351777 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgmn4\" (UniqueName: \"kubernetes.io/projected/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-kube-api-access-dgmn4\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.361865 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.372449 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzfhp\" (UniqueName: \"kubernetes.io/projected/0e8ad493-9466-46d8-8307-13f24463f184-kube-api-access-pzfhp\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.389876 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgr5k\" (UniqueName: \"kubernetes.io/projected/7e230409-6e68-4f7c-b0c3-3e55433b22c1-kube-api-access-fgr5k\") pod \"package-server-manager-789f6589d5-tbnnc\" (UID: \"7e230409-6e68-4f7c-b0c3-3e55433b22c1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.411341 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.432434 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql48r\" (UniqueName: \"kubernetes.io/projected/b6a491f6-3829-4c9d-88cb-a49864576106-kube-api-access-ql48r\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.449772 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qzff\" (UniqueName: 
\"kubernetes.io/projected/f56c1338-08c8-47de-b24a-3aaf85e315f8-kube-api-access-5qzff\") pod \"control-plane-machine-set-operator-78cbb6b69f-vwtw5\" (UID: \"f56c1338-08c8-47de-b24a-3aaf85e315f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.462467 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpn25\" (UniqueName: \"kubernetes.io/projected/0fffe8f2-59b1-4215-809e-461bc8f5e386-kube-api-access-xpn25\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.492344 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7brwl\" (UniqueName: \"kubernetes.io/projected/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-kube-api-access-7brwl\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.514248 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9p5g\" (UniqueName: \"kubernetes.io/projected/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-kube-api-access-j9p5g\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.531712 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.543329 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.544224 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx5zf\" (UniqueName: \"kubernetes.io/projected/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-kube-api-access-jx5zf\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.552179 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.564362 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.570798 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e8f132b-916b-4973-9873-5919cb12251c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: \"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.583843 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.599927 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.603858 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.614577 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.624025 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.631834 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.640655 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.162414 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6xwl\" (UniqueName: \"kubernetes.io/projected/0e97ae5e-35ab-41e9-aa03-ad060bbbd676-kube-api-access-z6xwl\") pod \"downloads-7954f5f757-5zvdg\" (UID: \"0e97ae5e-35ab-41e9-aa03-ad060bbbd676\") " pod="openshift-console/downloads-7954f5f757-5zvdg" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.162502 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.162881 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.163840 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5zvdg" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.164663 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-bound-sa-token\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.164758 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-certificates\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.164826 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ce1b6238-9a41-4472-accc-e4d7d6371357-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.164878 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ce1b6238-9a41-4472-accc-e4d7d6371357-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.164999 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-tls\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.167228 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.167351 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e8ad493-9466-46d8-8307-13f24463f184-config\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.168377 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.181680 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:40.681586761 +0000 UTC m=+139.914577241 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.185537 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww4b6\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-kube-api-access-ww4b6\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.185807 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-trusted-ca\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.213527 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmh4c\" (UniqueName: \"kubernetes.io/projected/90a4381e-451b-4940-932a-efba1d101c81-kube-api-access-bmh4c\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.248082 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.249452 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nt9qq"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.250942 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.254350 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-h6jgn"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.255434 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zjf9d"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.259326 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmsjj"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.296386 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.297022 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:40.796978213 +0000 UTC m=+140.029968683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297181 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4619229-a3d7-401d-92d8-b1195e6e08f8-config-volume\") pod \"collect-profiles-29488320-2ns8x\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297251 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-tls\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297275 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: \"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297294 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nm6h\" (UniqueName: 
\"kubernetes.io/projected/c4619229-a3d7-401d-92d8-b1195e6e08f8-kube-api-access-5nm6h\") pod \"collect-profiles-29488320-2ns8x\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297314 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f441771b-d1ad-442b-b344-e321cd553fbc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297357 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297409 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4619229-a3d7-401d-92d8-b1195e6e08f8-secret-volume\") pod \"collect-profiles-29488320-2ns8x\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297444 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkgt4\" (UniqueName: \"kubernetes.io/projected/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-kube-api-access-lkgt4\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: 
\"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297459 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-proxy-tls\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: \"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297480 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-797s5\" (UniqueName: \"kubernetes.io/projected/ba74a9d5-0b44-4599-ac43-d117394771b0-kube-api-access-797s5\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297494 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ba74a9d5-0b44-4599-ac43-d117394771b0-profile-collector-cert\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297519 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 
00:11:40.297536 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww4b6\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-kube-api-access-ww4b6\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297553 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f441771b-d1ad-442b-b344-e321cd553fbc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297572 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vx2f\" (UniqueName: \"kubernetes.io/projected/f441771b-d1ad-442b-b344-e321cd553fbc-kube-api-access-6vx2f\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297588 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m8vg\" (UniqueName: \"kubernetes.io/projected/caf7d2fa-5195-4e91-b838-a33c9e281dc1-kube-api-access-4m8vg\") pod \"migrator-59844c95c7-cxmmw\" (UID: \"caf7d2fa-5195-4e91-b838-a33c9e281dc1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297607 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-trusted-ca\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297626 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ba74a9d5-0b44-4599-ac43-d117394771b0-srv-cert\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297661 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297709 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxnl4\" (UniqueName: \"kubernetes.io/projected/fa35d682-53d6-4191-9cf9-f48b9f74e858-kube-api-access-sxnl4\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297758 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-bound-sa-token\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc 
kubenswrapper[4947]: I0125 00:11:40.297800 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-certificates\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.298357 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:40.798345029 +0000 UTC m=+140.031335459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.298917 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ce1b6238-9a41-4472-accc-e4d7d6371357-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.298993 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ce1b6238-9a41-4472-accc-e4d7d6371357-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mprs4\" (UID: 
\"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.299843 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-trusted-ca\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.300278 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ce1b6238-9a41-4472-accc-e4d7d6371357-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.301779 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-certificates\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.306983 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-tls\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.309288 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ce1b6238-9a41-4472-accc-e4d7d6371357-installation-pull-secrets\") pod 
\"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.322728 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww4b6\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-kube-api-access-ww4b6\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: W0125 00:11:40.342576 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf8b174c_d1eb_4a2d_88c2_113302fa2300.slice/crio-f13d4eaecedc3ea4b8275ea3df5dd2a15fe99a309dc244847fef98f9fa8c5fe1 WatchSource:0}: Error finding container f13d4eaecedc3ea4b8275ea3df5dd2a15fe99a309dc244847fef98f9fa8c5fe1: Status 404 returned error can't find the container with id f13d4eaecedc3ea4b8275ea3df5dd2a15fe99a309dc244847fef98f9fa8c5fe1 Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.342922 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-bound-sa-token\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: W0125 00:11:40.353447 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3a733c1_a1cf_42ef_a056_27185292354f.slice/crio-9fddec1b4c50133c39595d5ae85373dbb93cca3db14bf1f44dacfede0073d88d WatchSource:0}: Error finding container 9fddec1b4c50133c39595d5ae85373dbb93cca3db14bf1f44dacfede0073d88d: Status 404 returned error can't find the container with id 
9fddec1b4c50133c39595d5ae85373dbb93cca3db14bf1f44dacfede0073d88d Jan 25 00:11:40 crc kubenswrapper[4947]: W0125 00:11:40.355446 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8f2f610_05dc_49ea_882e_634d283b3caa.slice/crio-70796f6459b96427a71a3610eb2dfdfd1b263546ab86ca924c1f2e01cde24bc8 WatchSource:0}: Error finding container 70796f6459b96427a71a3610eb2dfdfd1b263546ab86ca924c1f2e01cde24bc8: Status 404 returned error can't find the container with id 70796f6459b96427a71a3610eb2dfdfd1b263546ab86ca924c1f2e01cde24bc8 Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.391369 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.404094 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.404224 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:40.904202479 +0000 UTC m=+140.137192919 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.404487 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-registration-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.404554 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4619229-a3d7-401d-92d8-b1195e6e08f8-config-volume\") pod \"collect-profiles-29488320-2ns8x\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.404582 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-socket-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.405770 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4619229-a3d7-401d-92d8-b1195e6e08f8-config-volume\") pod \"collect-profiles-29488320-2ns8x\" (UID: 
\"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.405821 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9252\" (UniqueName: \"kubernetes.io/projected/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-kube-api-access-k9252\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.405859 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-989fv\" (UniqueName: \"kubernetes.io/projected/8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261-kube-api-access-989fv\") pod \"ingress-canary-qgqfk\" (UID: \"8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261\") " pod="openshift-ingress-canary/ingress-canary-qgqfk" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.406019 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: \"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.406085 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nm6h\" (UniqueName: \"kubernetes.io/projected/c4619229-a3d7-401d-92d8-b1195e6e08f8-kube-api-access-5nm6h\") pod \"collect-profiles-29488320-2ns8x\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.406139 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/f441771b-d1ad-442b-b344-e321cd553fbc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.406707 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-node-bootstrap-token\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.406848 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.406872 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ddmc\" (UniqueName: \"kubernetes.io/projected/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-kube-api-access-9ddmc\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.406933 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261-cert\") pod \"ingress-canary-qgqfk\" (UID: \"8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261\") " 
pod="openshift-ingress-canary/ingress-canary-qgqfk" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.406973 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-292sf\" (UniqueName: \"kubernetes.io/projected/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-kube-api-access-292sf\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407005 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4619229-a3d7-401d-92d8-b1195e6e08f8-secret-volume\") pod \"collect-profiles-29488320-2ns8x\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407025 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-csi-data-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407043 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjqcj\" (UniqueName: \"kubernetes.io/projected/086f5a4b-235b-41a4-8bf6-75dd0626ba9e-kube-api-access-hjqcj\") pod \"multus-admission-controller-857f4d67dd-k7fhc\" (UID: \"086f5a4b-235b-41a4-8bf6-75dd0626ba9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407099 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-metrics-tls\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407176 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkgt4\" (UniqueName: \"kubernetes.io/projected/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-kube-api-access-lkgt4\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: \"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407213 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-797s5\" (UniqueName: \"kubernetes.io/projected/ba74a9d5-0b44-4599-ac43-d117394771b0-kube-api-access-797s5\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407262 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-proxy-tls\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: \"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407282 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-certs\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407341 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ba74a9d5-0b44-4599-ac43-d117394771b0-profile-collector-cert\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407374 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407423 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f441771b-d1ad-442b-b344-e321cd553fbc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407457 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vx2f\" (UniqueName: \"kubernetes.io/projected/f441771b-d1ad-442b-b344-e321cd553fbc-kube-api-access-6vx2f\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407525 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m8vg\" (UniqueName: 
\"kubernetes.io/projected/caf7d2fa-5195-4e91-b838-a33c9e281dc1-kube-api-access-4m8vg\") pod \"migrator-59844c95c7-cxmmw\" (UID: \"caf7d2fa-5195-4e91-b838-a33c9e281dc1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407634 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-mountpoint-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407668 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ba74a9d5-0b44-4599-ac43-d117394771b0-srv-cert\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407747 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407789 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxnl4\" (UniqueName: \"kubernetes.io/projected/fa35d682-53d6-4191-9cf9-f48b9f74e858-kube-api-access-sxnl4\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407910 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-config-volume\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407953 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/086f5a4b-235b-41a4-8bf6-75dd0626ba9e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-k7fhc\" (UID: \"086f5a4b-235b-41a4-8bf6-75dd0626ba9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.408080 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-plugins-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.409514 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:40.909498739 +0000 UTC m=+140.142489369 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.410536 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f441771b-d1ad-442b-b344-e321cd553fbc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.412066 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: \"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.413994 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.415026 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.417424 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f441771b-d1ad-442b-b344-e321cd553fbc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.417647 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.418735 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-proxy-tls\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: \"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.419223 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ba74a9d5-0b44-4599-ac43-d117394771b0-profile-collector-cert\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.419669 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/ba74a9d5-0b44-4599-ac43-d117394771b0-srv-cert\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.419685 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4619229-a3d7-401d-92d8-b1195e6e08f8-secret-volume\") pod \"collect-profiles-29488320-2ns8x\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.439888 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkgt4\" (UniqueName: \"kubernetes.io/projected/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-kube-api-access-lkgt4\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: \"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.460049 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vx2f\" (UniqueName: \"kubernetes.io/projected/f441771b-d1ad-442b-b344-e321cd553fbc-kube-api-access-6vx2f\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.487304 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-797s5\" (UniqueName: \"kubernetes.io/projected/ba74a9d5-0b44-4599-ac43-d117394771b0-kube-api-access-797s5\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.487968 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.489876 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.500057 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxnl4\" (UniqueName: \"kubernetes.io/projected/fa35d682-53d6-4191-9cf9-f48b9f74e858-kube-api-access-sxnl4\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.508503 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.508780 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-plugins-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.508841 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-registration-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.508873 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-socket-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.508896 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9252\" (UniqueName: \"kubernetes.io/projected/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-kube-api-access-k9252\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.508918 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-989fv\" (UniqueName: \"kubernetes.io/projected/8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261-kube-api-access-989fv\") pod \"ingress-canary-qgqfk\" (UID: \"8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261\") " pod="openshift-ingress-canary/ingress-canary-qgqfk" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.508972 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-node-bootstrap-token\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509008 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ddmc\" 
(UniqueName: \"kubernetes.io/projected/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-kube-api-access-9ddmc\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509041 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261-cert\") pod \"ingress-canary-qgqfk\" (UID: \"8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261\") " pod="openshift-ingress-canary/ingress-canary-qgqfk" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509068 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-csi-data-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509092 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjqcj\" (UniqueName: \"kubernetes.io/projected/086f5a4b-235b-41a4-8bf6-75dd0626ba9e-kube-api-access-hjqcj\") pod \"multus-admission-controller-857f4d67dd-k7fhc\" (UID: \"086f5a4b-235b-41a4-8bf6-75dd0626ba9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509116 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-292sf\" (UniqueName: \"kubernetes.io/projected/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-kube-api-access-292sf\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509160 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-metrics-tls\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509184 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-certs\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509227 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-mountpoint-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509269 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-config-volume\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509305 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/086f5a4b-235b-41a4-8bf6-75dd0626ba9e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-k7fhc\" (UID: \"086f5a4b-235b-41a4-8bf6-75dd0626ba9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.509594 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.009575377 +0000 UTC m=+140.242565817 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.510054 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-plugins-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.510110 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-registration-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.510167 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-socket-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.510567 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-mountpoint-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.510667 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-csi-data-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.511395 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-config-volume\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.513852 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-metrics-tls\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.515183 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-certs\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.518849 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/086f5a4b-235b-41a4-8bf6-75dd0626ba9e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-k7fhc\" (UID: 
\"086f5a4b-235b-41a4-8bf6-75dd0626ba9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.521796 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-95tmb"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.522537 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-node-bootstrap-token\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.524739 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261-cert\") pod \"ingress-canary-qgqfk\" (UID: \"8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261\") " pod="openshift-ingress-canary/ingress-canary-qgqfk" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.527496 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m8vg\" (UniqueName: \"kubernetes.io/projected/caf7d2fa-5195-4e91-b838-a33c9e281dc1-kube-api-access-4m8vg\") pod \"migrator-59844c95c7-cxmmw\" (UID: \"caf7d2fa-5195-4e91-b838-a33c9e281dc1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.562695 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nm6h\" (UniqueName: \"kubernetes.io/projected/c4619229-a3d7-401d-92d8-b1195e6e08f8-kube-api-access-5nm6h\") pod \"collect-profiles-29488320-2ns8x\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.569619 4947 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.571869 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.580499 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ddmc\" (UniqueName: \"kubernetes.io/projected/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-kube-api-access-9ddmc\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.601807 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.603362 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-292sf\" (UniqueName: \"kubernetes.io/projected/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-kube-api-access-292sf\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.610066 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.610569 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-25 00:11:41.11053516 +0000 UTC m=+140.343525600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.617623 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9252\" (UniqueName: \"kubernetes.io/projected/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-kube-api-access-k9252\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.638233 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-989fv\" (UniqueName: \"kubernetes.io/projected/8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261-kube-api-access-989fv\") pod \"ingress-canary-qgqfk\" (UID: \"8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261\") " pod="openshift-ingress-canary/ingress-canary-qgqfk" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.638492 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.655495 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.674063 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjqcj\" (UniqueName: \"kubernetes.io/projected/086f5a4b-235b-41a4-8bf6-75dd0626ba9e-kube-api-access-hjqcj\") pod \"multus-admission-controller-857f4d67dd-k7fhc\" (UID: \"086f5a4b-235b-41a4-8bf6-75dd0626ba9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" Jan 25 00:11:40 crc kubenswrapper[4947]: W0125 00:11:40.697675 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90a4381e_451b_4940_932a_efba1d101c81.slice/crio-8081867ca83d0d3dba7706ba48debc4ce6aad588973a48266a6d034a2ba0cc6d WatchSource:0}: Error finding container 8081867ca83d0d3dba7706ba48debc4ce6aad588973a48266a6d034a2ba0cc6d: Status 404 returned error can't find the container with id 8081867ca83d0d3dba7706ba48debc4ce6aad588973a48266a6d034a2ba0cc6d Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.716939 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.725482 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.225423229 +0000 UTC m=+140.458413669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.725660 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.728079 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.228052137 +0000 UTC m=+140.461042567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.759886 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.776357 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.799400 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2plqs"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.806809 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.828704 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.832763 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.332744417 +0000 UTC m=+140.565734857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.849197 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5zvdg"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.886611 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qgqfk" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.894289 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.933964 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.934293 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.434282455 +0000 UTC m=+140.667272895 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.037329 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.037865 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.537850956 +0000 UTC m=+140.770841396 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.057550 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t"] Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.078045 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8"] Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.080378 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" event={"ID":"bf8b174c-d1eb-4a2d-88c2-113302fa2300","Type":"ContainerStarted","Data":"e8d5de3cacf1e240be8237587d653ef39e1763036e3e7632b23368ebfdb7b23d"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.080409 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" event={"ID":"bf8b174c-d1eb-4a2d-88c2-113302fa2300","Type":"ContainerStarted","Data":"f13d4eaecedc3ea4b8275ea3df5dd2a15fe99a309dc244847fef98f9fa8c5fe1"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.085583 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wfcjp" event={"ID":"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf","Type":"ContainerStarted","Data":"9ca8287e9c26ec0dfd3d73ca6519fc5e70777d2ad74263c7918c4242406f9b4e"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.086756 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-95tmb" event={"ID":"49c456f9-6cbf-4e3c-992a-8636357253ad","Type":"ContainerStarted","Data":"ed1637877ab587a3769bbb73e160f3ac9a77b12181696670448b35273a1631bd"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.102712 4947 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vw66z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.102760 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.139735 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.141717 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.641700774 +0000 UTC m=+140.874691214 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155707 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155750 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" event={"ID":"222d5540-6b86-404a-b787-ea6a6043206a","Type":"ContainerStarted","Data":"1f85c504916b079d2fb54d456f76353ad645c3fb25746f3cdf54373957e57ba0"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155768 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" event={"ID":"16240ac3-819b-4e68-bca9-c97c94599fbb","Type":"ContainerStarted","Data":"bbedd4034703687c9b50b1c5783748e27c2d0500cbc6f6a796b4ead9976ebd8d"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155787 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" event={"ID":"16240ac3-819b-4e68-bca9-c97c94599fbb","Type":"ContainerStarted","Data":"f5375ebf7f4d9710bc386ea637ae73d1ddd57c341da724adc88f86c20f1e629e"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155833 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" 
event={"ID":"7e230409-6e68-4f7c-b0c3-3e55433b22c1","Type":"ContainerStarted","Data":"60e2a9246a0a5fe4602732b036d749d63160d445a75779df6146a329c452ffa3"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155844 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5zvdg" event={"ID":"0e97ae5e-35ab-41e9-aa03-ad060bbbd676","Type":"ContainerStarted","Data":"f856a317cec43d04d6a06e385253dff11efde283f4574e6eae2741b027f1dd0e"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155854 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" event={"ID":"37b7a00e-4def-4d1e-8333-94d15174223b","Type":"ContainerStarted","Data":"7403745f150d63c1345cf7e91fe3a3905bd0f9a92d392d4fa7f206d8e146d177"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155864 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" event={"ID":"37b7a00e-4def-4d1e-8333-94d15174223b","Type":"ContainerStarted","Data":"f071273158cef43a1913f83b87e712c39d955ed385651512fb2fd76cd5e1e89d"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155892 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" event={"ID":"d3a733c1-a1cf-42ef-a056-27185292354f","Type":"ContainerStarted","Data":"9fddec1b4c50133c39595d5ae85373dbb93cca3db14bf1f44dacfede0073d88d"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155903 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5nscb" event={"ID":"2244349f-df5c-4813-a0e7-418a602f57b0","Type":"ContainerStarted","Data":"ef36734da29446e8d9b2f83db38b77267a16fdb6bd73f6217a93feeb69974ada"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155914 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" event={"ID":"b50bda2b-e707-456e-af02-796b6d9a4cdf","Type":"ContainerStarted","Data":"f3f6cfce0cd0914f22fa73027204f70ae39cf141087dd5a03b2c06a180e391ce"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155925 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" event={"ID":"b8f2f610-05dc-49ea-882e-634d283b3caa","Type":"ContainerStarted","Data":"14485fd06b20211136c1bbbf3ca34f895cd859f8399c47a0300c020d2ee57a9b"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155934 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" event={"ID":"b8f2f610-05dc-49ea-882e-634d283b3caa","Type":"ContainerStarted","Data":"70796f6459b96427a71a3610eb2dfdfd1b263546ab86ca924c1f2e01cde24bc8"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155943 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" event={"ID":"90a4381e-451b-4940-932a-efba1d101c81","Type":"ContainerStarted","Data":"8081867ca83d0d3dba7706ba48debc4ce6aad588973a48266a6d034a2ba0cc6d"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.240334 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.240999 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-25 00:11:41.740982902 +0000 UTC m=+140.973973342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.342274 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.342842 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.842830588 +0000 UTC m=+141.075821028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.451655 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.452042 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.952028137 +0000 UTC m=+141.185018577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.553033 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.553327 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.053315628 +0000 UTC m=+141.286306068 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.636840 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5nscb" podStartSLOduration=120.636826211 podStartE2EDuration="2m0.636826211s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:41.626789477 +0000 UTC m=+140.859779917" watchObservedRunningTime="2026-01-25 00:11:41.636826211 +0000 UTC m=+140.869816651" Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.640513 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" podStartSLOduration=120.640472016 podStartE2EDuration="2m0.640472016s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:41.636384589 +0000 UTC m=+140.869375029" watchObservedRunningTime="2026-01-25 00:11:41.640472016 +0000 UTC m=+140.873462486" Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.653619 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.654002 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.153983712 +0000 UTC m=+141.386974152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.791649 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.792165 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.292148621 +0000 UTC m=+141.525139061 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.794077 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" podStartSLOduration=120.794059332 podStartE2EDuration="2m0.794059332s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:41.791398882 +0000 UTC m=+141.024389322" watchObservedRunningTime="2026-01-25 00:11:41.794059332 +0000 UTC m=+141.027049762" Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.816794 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v"] Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.840630 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" podStartSLOduration=119.840603054 podStartE2EDuration="1m59.840603054s" podCreationTimestamp="2026-01-25 00:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:41.835374996 +0000 UTC m=+141.068365436" watchObservedRunningTime="2026-01-25 00:11:41.840603054 +0000 UTC m=+141.073593494" Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.846017 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9"] Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.887864 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.893105 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.897398 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.397378746 +0000 UTC m=+141.630369176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.897917 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.000567 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.014393 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.514372649 +0000 UTC m=+141.747363089 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.022756 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:42 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:42 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:42 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.022811 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.107416 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.107927 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-25 00:11:42.607887146 +0000 UTC m=+141.840877586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.108113 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.108638 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.608590024 +0000 UTC m=+141.841580464 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.138832 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" event={"ID":"60ffdf66-0472-4e1f-9ea6-869acc338d0e","Type":"ContainerStarted","Data":"285da055e2c500c80ae5fa04797fa960cb9225549accda7dd866c3858c4a310d"} Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.147354 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" event={"ID":"8cba929a-19da-479b-b9fb-b4cffaaba4c2","Type":"ContainerStarted","Data":"11142abbc4087cf0856125625243d08d9a1f8c11e5c6dfa765096871226454ab"} Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.179245 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" event={"ID":"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd","Type":"ContainerStarted","Data":"d25566e3fd8885032b5a026027533f468cd438c216fc1a989681c6c296759918"} Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.186218 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" event={"ID":"b50bda2b-e707-456e-af02-796b6d9a4cdf","Type":"ContainerStarted","Data":"34c3bbc616f663d4b633400d99bde35312eef4d825a230ae1fe5a5ecc109aa10"} Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.199832 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" event={"ID":"b6a491f6-3829-4c9d-88cb-a49864576106","Type":"ContainerStarted","Data":"661eabb87f9473752ade357791459fc582b07355ece06fe48f097a1b6de947ee"} Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.210691 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.211018 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.711001435 +0000 UTC m=+141.943991875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.227925 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-95tmb" event={"ID":"49c456f9-6cbf-4e3c-992a-8636357253ad","Type":"ContainerStarted","Data":"afaa14d9f4c5b6f7cfac73b5c2aa6efec5df81648ad324830ffab9f1d41d01af"} Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.248717 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-95tmb" podStartSLOduration=121.248676895 podStartE2EDuration="2m1.248676895s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:42.246947059 +0000 UTC m=+141.479937499" watchObservedRunningTime="2026-01-25 00:11:42.248676895 +0000 UTC m=+141.481684265" Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.263381 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" event={"ID":"b5f3c960-a56e-4c0e-82da-c8a39167eb8b","Type":"ContainerStarted","Data":"35a32c22c8b7368e9288b40b6674d6db578e16ad1144c4ef60f720f913592f67"} Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.270866 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.272693 4947 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" podStartSLOduration=121.272682715 podStartE2EDuration="2m1.272682715s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:42.271312749 +0000 UTC m=+141.504303199" watchObservedRunningTime="2026-01-25 00:11:42.272682715 +0000 UTC m=+141.505673155" Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.306750 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.317505 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.319459 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.819444373 +0000 UTC m=+142.052434813 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.326600 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.337979 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xwjmr"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.343723 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.352426 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29488320-jf979"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.370098 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.370157 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.407667 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lz644"] Jan 25 00:11:42 crc kubenswrapper[4947]: W0125 00:11:42.411023 4947 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e4f53a6_fcc3_4310_965d_9a5dda91080b.slice/crio-3ad4af98bee02e97732bce66ef98812c3ca3ae679230f9721b2db8e405f6defc WatchSource:0}: Error finding container 3ad4af98bee02e97732bce66ef98812c3ca3ae679230f9721b2db8e405f6defc: Status 404 returned error can't find the container with id 3ad4af98bee02e97732bce66ef98812c3ca3ae679230f9721b2db8e405f6defc Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.412967 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.413979 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.417449 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.417493 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.418983 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.419270 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.919256585 +0000 UTC m=+142.152247025 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.429109 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7kcc9"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.442280 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nql4"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.480447 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vx9fn"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.484026 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.487950 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5s2mh"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.506893 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.524185 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 
00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.524613 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.024591453 +0000 UTC m=+142.257581893 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: W0125 00:11:42.583871 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ec7126b_b0f9_4fff_a11f_76726ce4c4ff.slice/crio-d9d3aef62d4d263918f652131fa6967d8214cecb1ec212c08034ef4732825bbb WatchSource:0}: Error finding container d9d3aef62d4d263918f652131fa6967d8214cecb1ec212c08034ef4732825bbb: Status 404 returned error can't find the container with id d9d3aef62d4d263918f652131fa6967d8214cecb1ec212c08034ef4732825bbb Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.625435 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.626028 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.126005287 +0000 UTC m=+142.358995727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.655996 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.665396 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.680382 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-k7fhc"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.690991 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.695209 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pjjgh"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.726417 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.727149 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.227114013 +0000 UTC m=+142.460104513 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.736351 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qgqfk"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.828092 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.828547 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.328531788 +0000 UTC m=+142.561522228 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: W0125 00:11:42.837577 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod086f5a4b_235b_41a4_8bf6_75dd0626ba9e.slice/crio-4323424913b00ce8d99f2455bd14a6573c7eab964c062141654eeeb744599d56 WatchSource:0}: Error finding container 4323424913b00ce8d99f2455bd14a6573c7eab964c062141654eeeb744599d56: Status 404 returned error can't find the container with id 4323424913b00ce8d99f2455bd14a6573c7eab964c062141654eeeb744599d56 Jan 25 00:11:42 crc kubenswrapper[4947]: W0125 00:11:42.870629 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fc6ed9e_cdc4_4908_8cd6_c75a12d1f261.slice/crio-c368fb44ef17ba57b9d54859c7b301317ad8712c3f5aeac6a9ae3afc23e46c37 WatchSource:0}: Error finding container c368fb44ef17ba57b9d54859c7b301317ad8712c3f5aeac6a9ae3afc23e46c37: Status 404 returned error can't find the container with id c368fb44ef17ba57b9d54859c7b301317ad8712c3f5aeac6a9ae3afc23e46c37 Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.896861 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:42 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:42 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:42 
crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.896906 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.931117 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.931693 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.431671626 +0000 UTC m=+142.664662066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.036865 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.037234 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.53721887 +0000 UTC m=+142.770209310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.138703 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.139462 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.639434935 +0000 UTC m=+142.872425375 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.240096 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.240839 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.740807238 +0000 UTC m=+142.973797678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.342771 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" event={"ID":"90a4381e-451b-4940-932a-efba1d101c81","Type":"ContainerStarted","Data":"a36d33d7eff1fe5b9c921481f4aec9414cff6a93439985e41b2e91134f149afc"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.343532 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.343824 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.843814034 +0000 UTC m=+143.076804474 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.399245 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5zvdg" event={"ID":"0e97ae5e-35ab-41e9-aa03-ad060bbbd676","Type":"ContainerStarted","Data":"817ff4963afa0de4cafefe1e9fd6d7e500d10fc898b7b9e0a7cbb3dfb9773581"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.400876 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5zvdg" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.422482 4947 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zvdg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.422525 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-5zvdg" podStartSLOduration=122.422510861 podStartE2EDuration="2m2.422510861s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.41867801 +0000 UTC m=+142.651668450" watchObservedRunningTime="2026-01-25 00:11:43.422510861 +0000 UTC m=+142.655501301" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.422542 4947 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-5zvdg" podUID="0e97ae5e-35ab-41e9-aa03-ad060bbbd676" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.444163 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.454407 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.954388409 +0000 UTC m=+143.187378849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.468318 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" event={"ID":"1e8f132b-916b-4973-9873-5919cb12251c","Type":"ContainerStarted","Data":"be7e342f3e6d6598d889734cd4187a95011471b87ee92c05cd69ab47fb74ee54"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.470323 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" event={"ID":"d3a733c1-a1cf-42ef-a056-27185292354f","Type":"ContainerStarted","Data":"e90e55f834d170a9f2751b73e12c28d7d7c3fcc4793df05b84ce36f32d19cba4"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.472163 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.499774 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" event={"ID":"9d15a018-8297-4e45-8a62-afa89a267381","Type":"ContainerStarted","Data":"f8461de37c95d2f68aceb775dc7fe3de76c76f12ffef68dfcf5d69ee41e8f1e4"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.501623 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" 
event={"ID":"c4619229-a3d7-401d-92d8-b1195e6e08f8","Type":"ContainerStarted","Data":"6f773596799deeb0c9b3c47a4dba49661e5faca82042d0d785f8b4d7e9bba2d2"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.504911 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.507496 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lz644" event={"ID":"c04cc1eb-ec23-4876-afd1-f123c04cdc8a","Type":"ContainerStarted","Data":"89aa9541b64237d49d9bc55b5e74e337ff03346683acd2bc16a98156f6d5fe57"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.512437 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" event={"ID":"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f","Type":"ContainerStarted","Data":"d3660274792703781a26567ec3825efc2393b9fdf7a7bfa82de2ed330954e1da"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.516732 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" event={"ID":"4e8662e0-1de8-4371-8836-214a0394675c","Type":"ContainerStarted","Data":"ce3ed48265c3712ee5171989380cd46457db282a272ad3e4b900c3141a69cfa4"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.521336 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" event={"ID":"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf","Type":"ContainerStarted","Data":"be500b11f2cd414cadc977f7f28f2d93ff6ad18aba9179114a7d7b0e2a2b6c30"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.530496 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" podStartSLOduration=122.530432376 podStartE2EDuration="2m2.530432376s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.506242441 +0000 UTC m=+142.739232901" watchObservedRunningTime="2026-01-25 00:11:43.530432376 +0000 UTC m=+142.763422816" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.530793 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" event={"ID":"fa35d682-53d6-4191-9cf9-f48b9f74e858","Type":"ContainerStarted","Data":"2742ff9aa7f3cbee3d8389c7f258cc4ce04fcb1e9943ebf713523dc12c66fb09"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.548675 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" event={"ID":"b50bda2b-e707-456e-af02-796b6d9a4cdf","Type":"ContainerStarted","Data":"571d7c89f2ff8bfac00aa8998ff9571361fa2e5332cb8ec273da69ee149b8245"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.557556 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" podStartSLOduration=122.557539568 podStartE2EDuration="2m2.557539568s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.556320876 +0000 UTC m=+142.789311346" watchObservedRunningTime="2026-01-25 00:11:43.557539568 +0000 UTC m=+142.790530008" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.586247 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 
00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.586991 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.086976501 +0000 UTC m=+143.319966941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.593580 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" podStartSLOduration=122.593565105 podStartE2EDuration="2m2.593565105s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.59145864 +0000 UTC m=+142.824449080" watchObservedRunningTime="2026-01-25 00:11:43.593565105 +0000 UTC m=+142.826555545" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.607423 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" event={"ID":"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8","Type":"ContainerStarted","Data":"b14687a0e3244999df330ccd307ba7acf30a5c1eeff21a579306144b475ba875"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.628674 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wfcjp" 
event={"ID":"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf","Type":"ContainerStarted","Data":"b5cba42e090732996c9ef825da7bbacbe888729641b141b7cb69f372f8556eb3"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.632928 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" event={"ID":"f441771b-d1ad-442b-b344-e321cd553fbc","Type":"ContainerStarted","Data":"3aa6854d99328d07f27b06bd8ddea80bd6297537355742f8b0e61d1c29092e6f"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.650309 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" event={"ID":"0fffe8f2-59b1-4215-809e-461bc8f5e386","Type":"ContainerStarted","Data":"acb1267a56efe44bfa2cb5992c2fcabc5a8bcbfcf18e59301eb07e360c1b2f9c"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.652971 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" event={"ID":"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd","Type":"ContainerStarted","Data":"37463940d19d8a83e4bd134f454542acb0204c8644b08b9ae011d12fbc46efcc"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.652995 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" event={"ID":"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd","Type":"ContainerStarted","Data":"3212d7fa51d2ce609b2adee39bb69b36b2894ec6d8bc82315a710df74b3ff72b"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.695410 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:43 crc kubenswrapper[4947]: 
E0125 00:11:43.697268 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.197253629 +0000 UTC m=+143.430244069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.747043 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" event={"ID":"ba74a9d5-0b44-4599-ac43-d117394771b0","Type":"ContainerStarted","Data":"9fa9b5f260a3ea017cbfafd5256d165def4e9e4a5e48b8e1a8146eede83ef7f3"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.747960 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.759109 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wfcjp" podStartSLOduration=6.759093633 podStartE2EDuration="6.759093633s" podCreationTimestamp="2026-01-25 00:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.739314783 +0000 UTC m=+142.972305223" watchObservedRunningTime="2026-01-25 00:11:43.759093633 +0000 UTC m=+142.992084073" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 
00:11:43.806122 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.806403 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.306392376 +0000 UTC m=+143.539382816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.833171 4947 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-546b8 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.833218 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" podUID="ba74a9d5-0b44-4599-ac43-d117394771b0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 25 
00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.833639 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" event={"ID":"b8f2f610-05dc-49ea-882e-634d283b3caa","Type":"ContainerStarted","Data":"fa173684fe2ea0de903aca7a66dcb95f0716e4d5290105ac3dc2d73458fe2028"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.835256 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" podStartSLOduration=122.835238704 podStartE2EDuration="2m2.835238704s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.759489654 +0000 UTC m=+142.992480094" watchObservedRunningTime="2026-01-25 00:11:43.835238704 +0000 UTC m=+143.068229144" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.850351 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" event={"ID":"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff","Type":"ContainerStarted","Data":"d9d3aef62d4d263918f652131fa6967d8214cecb1ec212c08034ef4732825bbb"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.852696 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5s2mh" event={"ID":"5a4acfb5-2387-48ae-8c78-9d8ab4d96628","Type":"ContainerStarted","Data":"bd054f659c7a993f372cba8e90f34fe7069ef3f73f76798f32719317745eea54"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.856164 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" event={"ID":"222d5540-6b86-404a-b787-ea6a6043206a","Type":"ContainerStarted","Data":"4e18bb53f3e6e6047cbc3494806da0bab8e691cfa23ed21cd0af462e1ed9ad06"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.870172 
4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" event={"ID":"7e230409-6e68-4f7c-b0c3-3e55433b22c1","Type":"ContainerStarted","Data":"c212c05762707e9591a6f1a88f4b3d922776a03021787825cb91d5c7499a5fa9"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.870546 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" event={"ID":"7e230409-6e68-4f7c-b0c3-3e55433b22c1","Type":"ContainerStarted","Data":"b1a00459e0656c1b3f67c3ef541ca7ffe8700b8b1227a65cc882f2f402504a5c"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.871199 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.873618 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" event={"ID":"086f5a4b-235b-41a4-8bf6-75dd0626ba9e","Type":"ContainerStarted","Data":"4323424913b00ce8d99f2455bd14a6573c7eab964c062141654eeeb744599d56"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.874524 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" event={"ID":"b6a491f6-3829-4c9d-88cb-a49864576106","Type":"ContainerStarted","Data":"76abab8158db7b6e9d610264d5a7b58e562dbe5b03e63e0a9a327716a5f2eaff"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.875323 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.885024 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29488320-jf979" 
event={"ID":"9e4f53a6-fcc3-4310-965d-9a5dda91080b","Type":"ContainerStarted","Data":"3ad4af98bee02e97732bce66ef98812c3ca3ae679230f9721b2db8e405f6defc"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.889521 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" event={"ID":"0e8ad493-9466-46d8-8307-13f24463f184","Type":"ContainerStarted","Data":"a9e32a740fc4bcaaf5be83c4412435dfc74afa6b38c157a1c647ec866181930d"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.892593 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" event={"ID":"60ffdf66-0472-4e1f-9ea6-869acc338d0e","Type":"ContainerStarted","Data":"e504adc83ba76b0afa9e4c18829ab3f97bcd549a99543ac06da8bfb4ecf4306a"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.894187 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qgqfk" event={"ID":"8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261","Type":"ContainerStarted","Data":"c368fb44ef17ba57b9d54859c7b301317ad8712c3f5aeac6a9ae3afc23e46c37"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.894394 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:43 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:43 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:43 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.894425 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.897515 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" event={"ID":"79a96518-940a-4490-9067-9e2f873753f7","Type":"ContainerStarted","Data":"0a65a9919e2536dd4a6a79ecb90630098f9b2fc217ff19f7af2d3deecb3b3d47"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.899071 4947 generic.go:334] "Generic (PLEG): container finished" podID="8cba929a-19da-479b-b9fb-b4cffaaba4c2" containerID="bba35843ee0d47e58c55c48ef257f5ba722c99699e81c20da941c1ba367843a8" exitCode=0 Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.899149 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" event={"ID":"8cba929a-19da-479b-b9fb-b4cffaaba4c2","Type":"ContainerDied","Data":"bba35843ee0d47e58c55c48ef257f5ba722c99699e81c20da941c1ba367843a8"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.905032 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" event={"ID":"4c613148-89dd-4904-b721-c90f6a0f89ba","Type":"ContainerStarted","Data":"ff924471f6e43e486cdf934200cc478d4e215254aa81e273cfafba9b5b6db46f"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.905101 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" event={"ID":"4c613148-89dd-4904-b721-c90f6a0f89ba","Type":"ContainerStarted","Data":"ddade357a51ab74504e3b9cf9bd47c6b00591cb9bf48a8e2213115c4f3485ab4"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.906717 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.906845 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.907996 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.407977115 +0000 UTC m=+143.640967555 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.908179 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.909559 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.409551776 +0000 UTC m=+143.642542216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.911384 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" event={"ID":"eaa67d1d-92d7-41aa-b72f-aee9bca370fc","Type":"ContainerStarted","Data":"87d9340038e3992641e84348234a6387bb6972a3cd340dc91fd7ff3ea70d4c8d"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.911732 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" event={"ID":"eaa67d1d-92d7-41aa-b72f-aee9bca370fc","Type":"ContainerStarted","Data":"62512cdbb5578b70db3441fdea0f27b6ca3bfc9fc525fb22e4f43dd844e2c107"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.913829 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" event={"ID":"f56c1338-08c8-47de-b24a-3aaf85e315f8","Type":"ContainerStarted","Data":"6fc89b77d4c3657ab517e644a87db8f5288c7317e1f50f648b903e80ac67446d"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.929394 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.930662 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" 
podStartSLOduration=122.930590338 podStartE2EDuration="2m2.930590338s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.927384444 +0000 UTC m=+143.160374884" watchObservedRunningTime="2026-01-25 00:11:43.930590338 +0000 UTC m=+143.163580778" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.937338 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw" event={"ID":"caf7d2fa-5195-4e91-b838-a33c9e281dc1","Type":"ContainerStarted","Data":"4574515e3c34916600fefdbf9e9ac6d941ebfb8b0d31c4be13694860ecfa3163"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.941737 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" event={"ID":"b5f3c960-a56e-4c0e-82da-c8a39167eb8b","Type":"ContainerStarted","Data":"a1ab096e1532dd00840229892616b23918284cf8f61930b73b483876680a8e52"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.946580 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" podStartSLOduration=122.946565258 podStartE2EDuration="2m2.946565258s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.946229509 +0000 UTC m=+143.179219949" watchObservedRunningTime="2026-01-25 00:11:43.946565258 +0000 UTC m=+143.179555708" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.948084 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" 
event={"ID":"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a","Type":"ContainerStarted","Data":"9b602122214d3f4e660e27fdfc9bb5ede90dc621ecc3606f8ff397cf0c2605a7"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.992729 4947 patch_prober.go:28] interesting pod/console-operator-58897d9998-xwjmr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.992785 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" podUID="4c613148-89dd-4904-b721-c90f6a0f89ba" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.017655 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.034667 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.534643592 +0000 UTC m=+143.767634032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.038779 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" podStartSLOduration=123.03876612 podStartE2EDuration="2m3.03876612s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.994082976 +0000 UTC m=+143.227073436" watchObservedRunningTime="2026-01-25 00:11:44.03876612 +0000 UTC m=+143.271756560" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.041054 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" podStartSLOduration=123.04104586 podStartE2EDuration="2m3.04104586s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:44.037880876 +0000 UTC m=+143.270871316" watchObservedRunningTime="2026-01-25 00:11:44.04104586 +0000 UTC m=+143.274036300" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.096653 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" podStartSLOduration=123.09663881 podStartE2EDuration="2m3.09663881s" podCreationTimestamp="2026-01-25 
00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:44.060949543 +0000 UTC m=+143.293939983" watchObservedRunningTime="2026-01-25 00:11:44.09663881 +0000 UTC m=+143.329629250" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.097400 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" podStartSLOduration=123.0973956 podStartE2EDuration="2m3.0973956s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:44.09661977 +0000 UTC m=+143.329610210" watchObservedRunningTime="2026-01-25 00:11:44.0973956 +0000 UTC m=+143.330386050" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.119789 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.123857 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.623842485 +0000 UTC m=+143.856832925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.127708 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" podStartSLOduration=123.127695477 podStartE2EDuration="2m3.127695477s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:44.123174097 +0000 UTC m=+143.356164537" watchObservedRunningTime="2026-01-25 00:11:44.127695477 +0000 UTC m=+143.360685917" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.181457 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29488320-jf979" podStartSLOduration=123.181440748 podStartE2EDuration="2m3.181440748s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:44.18039172 +0000 UTC m=+143.413382160" watchObservedRunningTime="2026-01-25 00:11:44.181440748 +0000 UTC m=+143.414431188" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.208610 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" podStartSLOduration=123.208594741 podStartE2EDuration="2m3.208594741s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:44.206416684 +0000 UTC m=+143.439407124" watchObservedRunningTime="2026-01-25 00:11:44.208594741 +0000 UTC m=+143.441585181" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.221461 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.221638 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.721615724 +0000 UTC m=+143.954606164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.221773 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.222056 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.722044695 +0000 UTC m=+143.955035135 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.275602 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" podStartSLOduration=123.275587191 podStartE2EDuration="2m3.275587191s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:44.274653787 +0000 UTC m=+143.507644257" watchObservedRunningTime="2026-01-25 00:11:44.275587191 +0000 UTC m=+143.508577621"
Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.323900 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" podStartSLOduration=123.32388721 podStartE2EDuration="2m3.32388721s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:44.323495609 +0000 UTC m=+143.556486049" watchObservedRunningTime="2026-01-25 00:11:44.32388721 +0000 UTC m=+143.556877650"
Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.324010 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.324175 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.824163838 +0000 UTC m=+144.057154278 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.324373 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.324711 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.824697401 +0000 UTC m=+144.057687841 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.465906 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.466429 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.966398013 +0000 UTC m=+144.199388453 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.466553 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.467143 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.967043891 +0000 UTC m=+144.200034341 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.569135 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.569514 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:45.069491872 +0000 UTC m=+144.302482312 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.670247 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.670668 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:45.170649379 +0000 UTC m=+144.403639859 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.824265 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.824570 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:45.324540492 +0000 UTC m=+144.557530932 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.893094 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 25 00:11:44 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld
Jan 25 00:11:44 crc kubenswrapper[4947]: [+]process-running ok
Jan 25 00:11:44 crc kubenswrapper[4947]: healthz check failed
Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.893435 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.925041 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.925357 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:45.42534678 +0000 UTC m=+144.658337220 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.017664 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29488320-jf979" event={"ID":"9e4f53a6-fcc3-4310-965d-9a5dda91080b","Type":"ContainerStarted","Data":"e3aa396695721797a7d088335e8a4442951eaec070e1d3920c891d8532000ff2"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.018840 4947 generic.go:334] "Generic (PLEG): container finished" podID="0fffe8f2-59b1-4215-809e-461bc8f5e386" containerID="aa9247348877ce1c25c5c915bc010ef8fba4e16be3c739dabfa7d9b3bcf5a876" exitCode=0
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.018890 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" event={"ID":"0fffe8f2-59b1-4215-809e-461bc8f5e386","Type":"ContainerDied","Data":"aa9247348877ce1c25c5c915bc010ef8fba4e16be3c739dabfa7d9b3bcf5a876"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.070544 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.070916 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:45.570901884 +0000 UTC m=+144.803892324 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.074103 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw" event={"ID":"caf7d2fa-5195-4e91-b838-a33c9e281dc1","Type":"ContainerStarted","Data":"b61abd12836d6b78abe3f1d40f7ad73108ea8b99b095eb1ce7f6596272f36914"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.074152 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw" event={"ID":"caf7d2fa-5195-4e91-b838-a33c9e281dc1","Type":"ContainerStarted","Data":"651788d7435f86de6cc26474f3b0eb31cd556bcfbdf54071473f5ceff21506a8"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.080061 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" event={"ID":"ba74a9d5-0b44-4599-ac43-d117394771b0","Type":"ContainerStarted","Data":"df791de749873dfde1b96aec802b16cb72aad8b7fc655393f14361a7b9bd2a72"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.080115 4947 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-546b8 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.080167 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" podUID="ba74a9d5-0b44-4599-ac43-d117394771b0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.081939 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" event={"ID":"fa35d682-53d6-4191-9cf9-f48b9f74e858","Type":"ContainerStarted","Data":"7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.082288 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.083080 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" event={"ID":"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf","Type":"ContainerStarted","Data":"2f718379bb37b4968bddb3d48c6e813e390b785a9b9b976ea9541f79813743aa"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.085197 4947 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vx9fn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body=
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.085247 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.085540 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" event={"ID":"f56c1338-08c8-47de-b24a-3aaf85e315f8","Type":"ContainerStarted","Data":"8bb502f70bc9b5879a3ceae7bc4dc7390875c64887290b3961962c53a6ffaf4b"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.086854 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" event={"ID":"79a96518-940a-4490-9067-9e2f873753f7","Type":"ContainerStarted","Data":"ac3597a1e97c5f544f9e4fb7afbdc3988a5d15b936cf402d62a5f0bd81cae5ca"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.088140 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5s2mh" event={"ID":"5a4acfb5-2387-48ae-8c78-9d8ab4d96628","Type":"ContainerStarted","Data":"7a02a813ed0d1d3e00931180064bcf1feb864d256e13210300a1fba9418d0375"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.094773 4947 generic.go:334] "Generic (PLEG): container finished" podID="4e8662e0-1de8-4371-8836-214a0394675c" containerID="945b5d99ec8ef011f994efaa385a05ae30e88e52774ce03b7d0b40048bc3cd23" exitCode=0
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.137715 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" event={"ID":"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8","Type":"ContainerStarted","Data":"edad46cd3c992922d1e36be124fac6367ce23a82408755ef8bce0cf121846ba4"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.137760 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qgqfk" event={"ID":"8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261","Type":"ContainerStarted","Data":"68a1188fe6718ac4f8d1fe08a6ea360e60a8e01a4c150f61299b7600035b1d3a"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.137791 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" event={"ID":"90a4381e-451b-4940-932a-efba1d101c81","Type":"ContainerStarted","Data":"e2fe7f469fd36f00c9973d1856e6125ee1eb8c60edeb2ed458cb99b3a0e56f75"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.137802 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" event={"ID":"4e8662e0-1de8-4371-8836-214a0394675c","Type":"ContainerDied","Data":"945b5d99ec8ef011f994efaa385a05ae30e88e52774ce03b7d0b40048bc3cd23"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.139023 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw" podStartSLOduration=124.138988353 podStartE2EDuration="2m4.138988353s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:45.138404057 +0000 UTC m=+144.371394507" watchObservedRunningTime="2026-01-25 00:11:45.138988353 +0000 UTC m=+144.371978793"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.140912 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" event={"ID":"8cba929a-19da-479b-b9fb-b4cffaaba4c2","Type":"ContainerStarted","Data":"d0ffcab6aed733717da1105cb1f5f2acc696e7577c9ce55affa1ead7d8bae0b8"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.141364 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.145394 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" event={"ID":"9d15a018-8297-4e45-8a62-afa89a267381","Type":"ContainerStarted","Data":"b8157486549c7153c6f440e775308e1931565402eac4d25b57d51cfdfab72be2"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.145933 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.146917 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" event={"ID":"0e8ad493-9466-46d8-8307-13f24463f184","Type":"ContainerStarted","Data":"82de40cdc7258ef4a95938968408c1821cc0386aba28e43807f2874eedec9e0f"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.148576 4947 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5nql4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.148607 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" podUID="9d15a018-8297-4e45-8a62-afa89a267381" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.234715 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lz644" event={"ID":"c04cc1eb-ec23-4876-afd1-f123c04cdc8a","Type":"ContainerStarted","Data":"9becd27dd782903e264ac8e0df2d7d45c200203a9c88f7c2640705bb3f570178"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.236001 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.236429 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:45.736415752 +0000 UTC m=+144.969406192 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.256500 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" podStartSLOduration=124.256112129 podStartE2EDuration="2m4.256112129s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:45.252091784 +0000 UTC m=+144.485082224" watchObservedRunningTime="2026-01-25 00:11:45.256112129 +0000 UTC m=+144.489102569"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.307666 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" event={"ID":"f441771b-d1ad-442b-b344-e321cd553fbc","Type":"ContainerStarted","Data":"43a3ec4eee507eddf68b8c231e7c2937e28aab82a5f76b60b422a597277e0f37"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.337331 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" event={"ID":"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff","Type":"ContainerStarted","Data":"cf923c5bc1cc75a7b7770478388250fd8d944575a71819e41d4943e522c67e56"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.341142 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.345159 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.345825 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:45.845809836 +0000 UTC m=+145.078800276 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.369478 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" event={"ID":"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a","Type":"ContainerStarted","Data":"2b986c9539b3428403b5cab5764473840d134a9afebfa773432083772ba7846f"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.370914 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.371258 4947 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bg9x9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body=
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.371305 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" podUID="17242dc8-e334-406d-ad0a-5dc9ecdf0d6a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.382582 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" event={"ID":"c4619229-a3d7-401d-92d8-b1195e6e08f8","Type":"ContainerStarted","Data":"8662d31844c88f4f7cb8129058f27d8b5eb8e6f9cec8ed2ed8df612ca9afb9ab"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.412356 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" event={"ID":"086f5a4b-235b-41a4-8bf6-75dd0626ba9e","Type":"ContainerStarted","Data":"aacc8bbda00c8a5c90dd93db7018c32875ccf0f035ecaa48f367ac8563c58e37"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.418254 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" event={"ID":"1e8f132b-916b-4973-9873-5919cb12251c","Type":"ContainerStarted","Data":"d8db0765e6225bb7e0e3574a4b97b7b8717c519025624235f7bbd860a4c4aaa2"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.420365 4947 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zvdg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.420400 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5zvdg" podUID="0e97ae5e-35ab-41e9-aa03-ad060bbbd676" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.420449 4947 patch_prober.go:28] interesting pod/console-operator-58897d9998-xwjmr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.420463 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" podUID="4c613148-89dd-4904-b721-c90f6a0f89ba" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.449042 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.449383 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:45.949371397 +0000 UTC m=+145.182361837 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.565272 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.565475 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.065435955 +0000 UTC m=+145.298426425 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.566013 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.570411 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.070396876 +0000 UTC m=+145.303387316 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.667621 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.668014 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.16799924 +0000 UTC m=+145.400989680 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.696040 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qgqfk" podStartSLOduration=9.696023176 podStartE2EDuration="9.696023176s" podCreationTimestamp="2026-01-25 00:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:45.407151087 +0000 UTC m=+144.640141537" watchObservedRunningTime="2026-01-25 00:11:45.696023176 +0000 UTC m=+144.929013616"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.696544 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" podStartSLOduration=124.696537559 podStartE2EDuration="2m4.696537559s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:45.694213708 +0000 UTC m=+144.927204148" watchObservedRunningTime="2026-01-25 00:11:45.696537559 +0000 UTC m=+144.929527999"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.769627 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID:
\"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.770003 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.269987988 +0000 UTC m=+145.502978428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.845667 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" podStartSLOduration=124.845631656 podStartE2EDuration="2m4.845631656s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:45.767777061 +0000 UTC m=+145.000767501" watchObservedRunningTime="2026-01-25 00:11:45.845631656 +0000 UTC m=+145.078622096" Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.878703 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.879030 4947 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.379012322 +0000 UTC m=+145.612002762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.953658 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:45 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:45 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:45 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.953721 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.954371 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" podStartSLOduration=124.954360162 podStartE2EDuration="2m4.954360162s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:45.848978724 +0000 UTC m=+145.081969164" watchObservedRunningTime="2026-01-25 00:11:45.954360162 +0000 UTC m=+145.187350602" Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.955787 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" podStartSLOduration=124.955781089 podStartE2EDuration="2m4.955781089s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:45.954606798 +0000 UTC m=+145.187597238" watchObservedRunningTime="2026-01-25 00:11:45.955781089 +0000 UTC m=+145.188771529" Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.980481 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.980990 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.480975871 +0000 UTC m=+145.713966311 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.982011 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" podStartSLOduration=124.981981888 podStartE2EDuration="2m4.981981888s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:45.981188396 +0000 UTC m=+145.214178846" watchObservedRunningTime="2026-01-25 00:11:45.981981888 +0000 UTC m=+145.214972328" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.046346 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" podStartSLOduration=125.046322728 podStartE2EDuration="2m5.046322728s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.045066745 +0000 UTC m=+145.278057185" watchObservedRunningTime="2026-01-25 00:11:46.046322728 +0000 UTC m=+145.279313158" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.048081 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-lz644" podStartSLOduration=124.048075224 podStartE2EDuration="2m4.048075224s" podCreationTimestamp="2026-01-25 00:09:42 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.00604869 +0000 UTC m=+145.239039140" watchObservedRunningTime="2026-01-25 00:11:46.048075224 +0000 UTC m=+145.281065664" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.080233 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" podStartSLOduration=124.080196318 podStartE2EDuration="2m4.080196318s" podCreationTimestamp="2026-01-25 00:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.077786544 +0000 UTC m=+145.310776974" watchObservedRunningTime="2026-01-25 00:11:46.080196318 +0000 UTC m=+145.313186758" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.082472 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.082859 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.582843018 +0000 UTC m=+145.815833458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.184324 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.184718 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.684696903 +0000 UTC m=+145.917687333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.285909 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.286093 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.786066606 +0000 UTC m=+146.019057046 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.286439 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.286720 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.786707143 +0000 UTC m=+146.019697583 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.387845 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.394430 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.894395661 +0000 UTC m=+146.127386111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.425280 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" event={"ID":"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f","Type":"ContainerStarted","Data":"b3e5f53453dab5d4e2303e5b8afc3fd1a7832810cdd6e0b7d62a50f9d46fb231"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.427358 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" event={"ID":"79a96518-940a-4490-9067-9e2f873753f7","Type":"ContainerStarted","Data":"db38cb4a4d7183b227730f1217ae7ffc2f561b4b9d1c67d6a18c0122b7de4d0f"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.430246 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5s2mh" event={"ID":"5a4acfb5-2387-48ae-8c78-9d8ab4d96628","Type":"ContainerStarted","Data":"cc73d351af1970d9f2f3aaa1b0265e0ae023d438f6539cd7b47be1a1e865c301"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.430267 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.431518 4947 generic.go:334] "Generic (PLEG): container finished" podID="c4619229-a3d7-401d-92d8-b1195e6e08f8" containerID="8662d31844c88f4f7cb8129058f27d8b5eb8e6f9cec8ed2ed8df612ca9afb9ab" exitCode=0 Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.431557 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" event={"ID":"c4619229-a3d7-401d-92d8-b1195e6e08f8","Type":"ContainerDied","Data":"8662d31844c88f4f7cb8129058f27d8b5eb8e6f9cec8ed2ed8df612ca9afb9ab"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.433080 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" event={"ID":"086f5a4b-235b-41a4-8bf6-75dd0626ba9e","Type":"ContainerStarted","Data":"6612a04375af55a59f2ab05a3064359f1f2ab103103b4b144af242341c24090c"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.434914 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" event={"ID":"0fffe8f2-59b1-4215-809e-461bc8f5e386","Type":"ContainerStarted","Data":"48c249a7f0d3d14dedba9e40b40a2be4bb54267016202c6a467dd71e75d7d604"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.437293 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" event={"ID":"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf","Type":"ContainerStarted","Data":"bf53b13a34756f4dc417b424e2b53e4ad8b307158e97faabf42d675e77081c21"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.439359 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" event={"ID":"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8","Type":"ContainerStarted","Data":"28ca4e5473cf669d0f64248c6f52e4fd6ae63e28bacca4cb2c7166bafdb821dd"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.441961 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" event={"ID":"4e8662e0-1de8-4371-8836-214a0394675c","Type":"ContainerStarted","Data":"fc95af35ade002bdea64007c152d00e59af41ddbbe819254e011dcc2cf2862a3"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.441987 4947 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" event={"ID":"4e8662e0-1de8-4371-8836-214a0394675c","Type":"ContainerStarted","Data":"efbb70e0a95a9a51cdf70bdea04cddec4723e28949bb8e6e11d15406c673058d"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.443452 4947 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vx9fn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.443533 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.449680 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.463556 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.464385 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" podStartSLOduration=125.46436819 podStartE2EDuration="2m5.46436819s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.462955243 +0000 UTC m=+145.695945683" watchObservedRunningTime="2026-01-25 00:11:46.46436819 +0000 UTC 
m=+145.697358630" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.517023 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.517337 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.017326221 +0000 UTC m=+146.250316661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.517875 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" podStartSLOduration=125.517858626 podStartE2EDuration="2m5.517858626s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.516313485 +0000 UTC m=+145.749303925" watchObservedRunningTime="2026-01-25 00:11:46.517858626 +0000 UTC m=+145.750849066" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.569508 4947 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" podStartSLOduration=125.569487792 podStartE2EDuration="2m5.569487792s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.568493416 +0000 UTC m=+145.801483876" watchObservedRunningTime="2026-01-25 00:11:46.569487792 +0000 UTC m=+145.802478232" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.618408 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.619828 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.119808633 +0000 UTC m=+146.352799073 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.633174 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" podStartSLOduration=125.633156634 podStartE2EDuration="2m5.633156634s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.622557605 +0000 UTC m=+145.855548045" watchObservedRunningTime="2026-01-25 00:11:46.633156634 +0000 UTC m=+145.866147074" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.720805 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.721140 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.221109135 +0000 UTC m=+146.454099575 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.750325 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" podStartSLOduration=125.750305072 podStartE2EDuration="2m5.750305072s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.715238611 +0000 UTC m=+145.948229051" watchObservedRunningTime="2026-01-25 00:11:46.750305072 +0000 UTC m=+145.983295512" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.786404 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" podStartSLOduration=124.786388569 podStartE2EDuration="2m4.786388569s" podCreationTimestamp="2026-01-25 00:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.78259364 +0000 UTC m=+146.015584080" watchObservedRunningTime="2026-01-25 00:11:46.786388569 +0000 UTC m=+146.019379009" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.821536 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.822682 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.322660872 +0000 UTC m=+146.555651312 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.900424 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5s2mh" podStartSLOduration=10.900407305 podStartE2EDuration="10.900407305s" podCreationTimestamp="2026-01-25 00:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.89946016 +0000 UTC m=+146.132450590" watchObservedRunningTime="2026-01-25 00:11:46.900407305 +0000 UTC m=+146.133397745" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.901913 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:46 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:46 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:46 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:46 crc 
kubenswrapper[4947]: I0125 00:11:46.901966 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.929440 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.929682 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.429669024 +0000 UTC m=+146.662659454 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.949257 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.019452 4947 csr.go:261] certificate signing request csr-6czf7 is approved, waiting to be issued Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.030380 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.030700 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.530686867 +0000 UTC m=+146.763677297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.036259 4947 csr.go:257] certificate signing request csr-6czf7 is issued Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.075288 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.075346 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.138417 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.138852 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.638829458 +0000 UTC m=+146.871819888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.241756 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.242041 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.7420261 +0000 UTC m=+146.975016540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.342759 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.343156 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.843143236 +0000 UTC m=+147.076133666 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.443868 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.444042 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.944020976 +0000 UTC m=+147.177011416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.444108 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.444449 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.944436236 +0000 UTC m=+147.177426676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.454780 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" event={"ID":"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f","Type":"ContainerStarted","Data":"7a98a66fc45b9dc850d7ca4447dc28d96aff763e55d1f5cce1c1d7ab98ea1f65"} Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.454858 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" event={"ID":"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f","Type":"ContainerStarted","Data":"90114e1dd2eafccc01a2cae1ef4816eadf2f35dee5c4f27fe807d0c493c3c761"} Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.456698 4947 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vx9fn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.456745 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.546051 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.547341 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.04732764 +0000 UTC m=+147.280318080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.646491 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.647873 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.648293 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-25 00:11:48.148280802 +0000 UTC m=+147.381271242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.748569 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.748780 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.248736012 +0000 UTC m=+147.481726452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.749236 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.749566 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.249550982 +0000 UTC m=+147.482541422 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.850863 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.851169 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.35108556 +0000 UTC m=+147.584076010 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.851407 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.851765 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.351750647 +0000 UTC m=+147.584741087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.910637 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:47 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:47 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:47 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.910984 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.936619 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qzj76"] Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.937511 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.939589 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.952825 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.953208 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.453175072 +0000 UTC m=+147.686165512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.954008 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxzk6\" (UniqueName: \"kubernetes.io/projected/900aeb01-050c-45b8-936c-e5f8d73ebeb5-kube-api-access-vxzk6\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.954191 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.954277 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.954545 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-utilities\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.954614 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.45460254 +0000 UTC m=+147.687593180 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.954284 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.954769 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.955002 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-catalog-content\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.955146 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.955263 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.962796 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.974806 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.974995 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.979173 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-47m2l"] Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.979391 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4619229-a3d7-401d-92d8-b1195e6e08f8" containerName="collect-profiles" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.979411 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4619229-a3d7-401d-92d8-b1195e6e08f8" containerName="collect-profiles" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.979514 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4619229-a3d7-401d-92d8-b1195e6e08f8" containerName="collect-profiles" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.980327 
4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.982673 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.013981 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.016806 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.031259 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nmrkd"] Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.032138 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.040197 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-25 00:06:47 +0000 UTC, rotation deadline is 2026-10-22 11:45:25.170161227 +0000 UTC Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.040232 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6491h33m37.129931868s for next certificate rotation Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059224 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4619229-a3d7-401d-92d8-b1195e6e08f8-config-volume\") pod \"c4619229-a3d7-401d-92d8-b1195e6e08f8\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059462 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059501 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4619229-a3d7-401d-92d8-b1195e6e08f8-secret-volume\") pod \"c4619229-a3d7-401d-92d8-b1195e6e08f8\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059548 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nm6h\" (UniqueName: \"kubernetes.io/projected/c4619229-a3d7-401d-92d8-b1195e6e08f8-kube-api-access-5nm6h\") pod \"c4619229-a3d7-401d-92d8-b1195e6e08f8\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " Jan 25 00:11:48 crc 
kubenswrapper[4947]: I0125 00:11:48.059719 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxzk6\" (UniqueName: \"kubernetes.io/projected/900aeb01-050c-45b8-936c-e5f8d73ebeb5-kube-api-access-vxzk6\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059762 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-catalog-content\") pod \"certified-operators-nmrkd\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059802 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhdwv\" (UniqueName: \"kubernetes.io/projected/ad96bcad-395b-4844-9992-00acdf7436c2-kube-api-access-fhdwv\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059821 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-utilities\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059843 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-catalog-content\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " 
pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059864 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfhzb\" (UniqueName: \"kubernetes.io/projected/8631ec11-9ab2-4799-b57c-0a346ec69767-kube-api-access-tfhzb\") pod \"certified-operators-nmrkd\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059884 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-catalog-content\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059899 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-utilities\") pod \"certified-operators-nmrkd\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059914 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-utilities\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.060030 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-25 00:11:48.560012739 +0000 UTC m=+147.793003179 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.060227 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4619229-a3d7-401d-92d8-b1195e6e08f8-config-volume" (OuterVolumeSpecName: "config-volume") pod "c4619229-a3d7-401d-92d8-b1195e6e08f8" (UID: "c4619229-a3d7-401d-92d8-b1195e6e08f8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.060696 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-utilities\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.062712 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-catalog-content\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.062849 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4619229-a3d7-401d-92d8-b1195e6e08f8-secret-volume" 
(OuterVolumeSpecName: "secret-volume") pod "c4619229-a3d7-401d-92d8-b1195e6e08f8" (UID: "c4619229-a3d7-401d-92d8-b1195e6e08f8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.065397 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4619229-a3d7-401d-92d8-b1195e6e08f8-kube-api-access-5nm6h" (OuterVolumeSpecName: "kube-api-access-5nm6h") pod "c4619229-a3d7-401d-92d8-b1195e6e08f8" (UID: "c4619229-a3d7-401d-92d8-b1195e6e08f8"). InnerVolumeSpecName "kube-api-access-5nm6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.083250 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmrkd"] Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.103036 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qzj76"] Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.122614 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47m2l"] Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.163484 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxzk6\" (UniqueName: \"kubernetes.io/projected/900aeb01-050c-45b8-936c-e5f8d73ebeb5-kube-api-access-vxzk6\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.163873 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164020 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-catalog-content\") pod \"certified-operators-nmrkd\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164186 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhdwv\" (UniqueName: \"kubernetes.io/projected/ad96bcad-395b-4844-9992-00acdf7436c2-kube-api-access-fhdwv\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164298 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-catalog-content\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164403 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfhzb\" (UniqueName: \"kubernetes.io/projected/8631ec11-9ab2-4799-b57c-0a346ec69767-kube-api-access-tfhzb\") pod \"certified-operators-nmrkd\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164504 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-utilities\") pod \"certified-operators-nmrkd\" (UID: 
\"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164606 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-utilities\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164789 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nm6h\" (UniqueName: \"kubernetes.io/projected/c4619229-a3d7-401d-92d8-b1195e6e08f8-kube-api-access-5nm6h\") on node \"crc\" DevicePath \"\"" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164877 4947 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4619229-a3d7-401d-92d8-b1195e6e08f8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164957 4947 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4619229-a3d7-401d-92d8-b1195e6e08f8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164852 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-catalog-content\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.165178 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-catalog-content\") pod \"certified-operators-nmrkd\" (UID: 
\"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.165436 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.665416607 +0000 UTC m=+147.898407147 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.165765 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-utilities\") pod \"certified-operators-nmrkd\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.165953 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-utilities\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.217842 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfhzb\" (UniqueName: \"kubernetes.io/projected/8631ec11-9ab2-4799-b57c-0a346ec69767-kube-api-access-tfhzb\") pod \"certified-operators-nmrkd\" (UID: 
\"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.221307 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhdwv\" (UniqueName: \"kubernetes.io/projected/ad96bcad-395b-4844-9992-00acdf7436c2-kube-api-access-fhdwv\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.235732 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4vzx6"] Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.240860 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.245661 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.273373 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.273769 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-25 00:11:48.773743114 +0000 UTC m=+148.006733544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.274345 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-utilities\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.274479 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-catalog-content\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.274655 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.274740 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lszc2\" (UniqueName: \"kubernetes.io/projected/4fbe2fc7-f0a5-439c-988c-d034d3da6add-kube-api-access-lszc2\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.275302 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.775284614 +0000 UTC m=+148.008275054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.285643 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4vzx6"] Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.327458 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.403623 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.405138 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.405492 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-catalog-content\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.405554 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lszc2\" (UniqueName: \"kubernetes.io/projected/4fbe2fc7-f0a5-439c-988c-d034d3da6add-kube-api-access-lszc2\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.405600 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-utilities\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.405946 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-utilities\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " 
pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.406027 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.906012968 +0000 UTC m=+148.139003408 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.406259 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-catalog-content\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.412822 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.412982 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.435042 4947 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.457786 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lszc2\" (UniqueName: \"kubernetes.io/projected/4fbe2fc7-f0a5-439c-988c-d034d3da6add-kube-api-access-lszc2\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.507531 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.507831 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.007820803 +0000 UTC m=+148.240811243 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.512577 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" event={"ID":"c4619229-a3d7-401d-92d8-b1195e6e08f8","Type":"ContainerDied","Data":"6f773596799deeb0c9b3c47a4dba49661e5faca82042d0d785f8b4d7e9bba2d2"} Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.512623 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f773596799deeb0c9b3c47a4dba49661e5faca82042d0d785f8b4d7e9bba2d2" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.512717 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.542748 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.543370 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.546734 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.547047 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.590929 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.593298 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" event={"ID":"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f","Type":"ContainerStarted","Data":"25a5d9d4aab1073fe916b242776f45defb947f1afcfe4589e770972c6821c5e8"} Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.609105 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.609803 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.109784492 +0000 UTC m=+148.342774932 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.695094 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.715349 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.715618 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67dda4f2-8e20-4173-8789-a53030fa141f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67dda4f2-8e20-4173-8789-a53030fa141f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.715640 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67dda4f2-8e20-4173-8789-a53030fa141f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67dda4f2-8e20-4173-8789-a53030fa141f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.717045 4947 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.217029938 +0000 UTC m=+148.450020378 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.816245 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.816513 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67dda4f2-8e20-4173-8789-a53030fa141f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67dda4f2-8e20-4173-8789-a53030fa141f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.816540 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67dda4f2-8e20-4173-8789-a53030fa141f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67dda4f2-8e20-4173-8789-a53030fa141f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 
00:11:48.816874 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.316859872 +0000 UTC m=+148.549850312 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.816903 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67dda4f2-8e20-4173-8789-a53030fa141f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67dda4f2-8e20-4173-8789-a53030fa141f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.887642 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.899028 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67dda4f2-8e20-4173-8789-a53030fa141f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67dda4f2-8e20-4173-8789-a53030fa141f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.914747 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" podStartSLOduration=12.914729283 
podStartE2EDuration="12.914729283s" podCreationTimestamp="2026-01-25 00:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:48.750244451 +0000 UTC m=+147.983234901" watchObservedRunningTime="2026-01-25 00:11:48.914729283 +0000 UTC m=+148.147719723" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.918625 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:48 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:48 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:48 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.918671 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.926419 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.926717 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.426706057 +0000 UTC m=+148.659696497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.935596 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.935847 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.936881 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.979908 4947 patch_prober.go:28] interesting pod/console-f9d7485db-95tmb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.980484 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-95tmb" podUID="49c456f9-6cbf-4e3c-992a-8636357253ad" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.032871 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:49 crc kubenswrapper[4947]: E0125 00:11:49.033683 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.533647556 +0000 UTC m=+148.766637996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.034197 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:49 crc kubenswrapper[4947]: E0125 00:11:49.034488 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.534478618 +0000 UTC m=+148.767469058 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.135237 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:49 crc kubenswrapper[4947]: E0125 00:11:49.135628 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.635613955 +0000 UTC m=+148.868604395 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.165859 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.240452 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:49 crc kubenswrapper[4947]: E0125 00:11:49.240820 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.740806128 +0000 UTC m=+148.973796568 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.285391 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.286537 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.324384 4947 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-25T00:11:48.435062272Z","Handler":null,"Name":""} Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.342661 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:49 crc kubenswrapper[4947]: E0125 00:11:49.343205 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.843119076 +0000 UTC m=+149.076109516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.443995 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:49 crc kubenswrapper[4947]: E0125 00:11:49.446277 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.946266286 +0000 UTC m=+149.179256726 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.548688 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:49 crc kubenswrapper[4947]: E0125 00:11:49.548983 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:50.048968433 +0000 UTC m=+149.281958863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.601915 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.601945 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.616599 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"dbf1d6f17fcf81edfc092beda07ad7555a6b9c7867b661dbbb51c1bf80ff4748"} Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.628303 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f2e8c896ba14b65bea1959fd101dfe34efe55d0fc583761612aee0d6fd62d57e"} Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.628350 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0a61da00d3ed8c9ab02df71450413018e03b1686d4cb6c823f20ee2679791124"} Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.629160 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.637309 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.652391 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f3ad3dc1fc24d0db3232a872cd25d7ea5326010f4237f0c8c301b532155a9656"} Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.652437 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a71e0ad5256a9df5103cd71cfe9af5e1092f6997457b4bb8ceef0bcf2424bd32"} Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.653825 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:49 crc kubenswrapper[4947]: E0125 00:11:49.654149 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:50.154110876 +0000 UTC m=+149.387101316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.730137 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qzj76"] Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.741554 4947 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.741597 4947 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.756483 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.775507 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.788570 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47m2l"] Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.826057 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wwwnp"] Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.827584 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.835177 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.837165 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwwnp"] Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.859413 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.882759 4947 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.882796 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.897522 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:49 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:49 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:49 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.897988 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.959759 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmrkd"] Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.960869 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-catalog-content\") pod \"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " 
pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.960903 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-utilities\") pod \"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.960941 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8th5h\" (UniqueName: \"kubernetes.io/projected/06282146-8047-4104-b189-c896e5b7f8b9-kube-api-access-8th5h\") pod \"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.986478 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.064106 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-utilities\") pod \"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.064558 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8th5h\" (UniqueName: \"kubernetes.io/projected/06282146-8047-4104-b189-c896e5b7f8b9-kube-api-access-8th5h\") pod 
\"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.064638 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-catalog-content\") pod \"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.064730 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-utilities\") pod \"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.064404 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4vzx6"] Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.065135 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-catalog-content\") pod \"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.127074 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.147114 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.159022 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8th5h\" (UniqueName: \"kubernetes.io/projected/06282146-8047-4104-b189-c896e5b7f8b9-kube-api-access-8th5h\") pod \"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.166301 4947 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zvdg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.166346 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5zvdg" podUID="0e97ae5e-35ab-41e9-aa03-ad060bbbd676" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.168717 4947 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zvdg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.168764 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5zvdg" podUID="0e97ae5e-35ab-41e9-aa03-ad060bbbd676" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.227718 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckt7"] Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.228676 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.247833 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckt7"] Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.358422 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.370049 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj6xh\" (UniqueName: \"kubernetes.io/projected/47cb5005-6286-4d5c-b654-65009ac6d3d9-kube-api-access-xj6xh\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.370195 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-utilities\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.370227 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-catalog-content\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " 
pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.471689 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj6xh\" (UniqueName: \"kubernetes.io/projected/47cb5005-6286-4d5c-b654-65009ac6d3d9-kube-api-access-xj6xh\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.471730 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-utilities\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.471747 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-catalog-content\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.472398 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-catalog-content\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.472473 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-utilities\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " pod="openshift-marketplace/redhat-marketplace-2ckt7" 
Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.530194 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj6xh\" (UniqueName: \"kubernetes.io/projected/47cb5005-6286-4d5c-b654-65009ac6d3d9-kube-api-access-xj6xh\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.634244 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.675493 4947 generic.go:334] "Generic (PLEG): container finished" podID="ad96bcad-395b-4844-9992-00acdf7436c2" containerID="95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643" exitCode=0 Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.675830 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47m2l" event={"ID":"ad96bcad-395b-4844-9992-00acdf7436c2","Type":"ContainerDied","Data":"95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.675855 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47m2l" event={"ID":"ad96bcad-395b-4844-9992-00acdf7436c2","Type":"ContainerStarted","Data":"6cbc84951af1c9fb04adcfedc17cf7a2205629dcc8722ddaa8c1026d70782225"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.678392 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.692461 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f9b357dd8b8eb9fb06c3bf662c2fa7a42707d4d1f8cf3a4901b380828df7cb04"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.715412 4947 generic.go:334] "Generic (PLEG): container finished" podID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerID="ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0" exitCode=0 Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.715561 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmrkd" event={"ID":"8631ec11-9ab2-4799-b57c-0a346ec69767","Type":"ContainerDied","Data":"ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.715622 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmrkd" event={"ID":"8631ec11-9ab2-4799-b57c-0a346ec69767","Type":"ContainerStarted","Data":"7d6ebf3601605e6c873a327cc838407e459ee58147699177b0740d46b1d7aedf"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.721538 4947 generic.go:334] "Generic (PLEG): container finished" podID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerID="35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25" exitCode=0 Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.721611 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vzx6" event={"ID":"4fbe2fc7-f0a5-439c-988c-d034d3da6add","Type":"ContainerDied","Data":"35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.721639 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vzx6" event={"ID":"4fbe2fc7-f0a5-439c-988c-d034d3da6add","Type":"ContainerStarted","Data":"e4fc08944b569f65f472ef5d6a0000744c15a40d1962fcdb333c93ea9560dbba"} Jan 25 00:11:50 crc kubenswrapper[4947]: 
I0125 00:11:50.738852 4947 generic.go:334] "Generic (PLEG): container finished" podID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerID="3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0" exitCode=0 Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.738956 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzj76" event={"ID":"900aeb01-050c-45b8-936c-e5f8d73ebeb5","Type":"ContainerDied","Data":"3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.738989 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzj76" event={"ID":"900aeb01-050c-45b8-936c-e5f8d73ebeb5","Type":"ContainerStarted","Data":"596449ceb20f31ed206815663af8903fec2583551204ec75b85d39be48c2895f"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.744784 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"67dda4f2-8e20-4173-8789-a53030fa141f","Type":"ContainerStarted","Data":"464af017ba47fe42ba33892a92a7d488e926c49ee9650ccb5665a6435ad230c8"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.754086 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.794375 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.807709 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ltw77"] Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.808692 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.823529 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.825897 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ltw77"] Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.889828 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-utilities\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.889898 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-catalog-content\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.889962 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ndnd\" (UniqueName: \"kubernetes.io/projected/49263faf-29f4-481c-aafd-a271a29c209a-kube-api-access-7ndnd\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.904320 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:50 crc 
kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:50 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:50 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.904370 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.991584 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ndnd\" (UniqueName: \"kubernetes.io/projected/49263faf-29f4-481c-aafd-a271a29c209a-kube-api-access-7ndnd\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.991696 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-utilities\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.991724 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-catalog-content\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.992158 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-catalog-content\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") 
" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.992588 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-utilities\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.012532 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mprs4"] Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.013872 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ndnd\" (UniqueName: \"kubernetes.io/projected/49263faf-29f4-481c-aafd-a271a29c209a-kube-api-access-7ndnd\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.024806 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4w46p"] Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.025842 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.033694 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwwnp"] Jan 25 00:11:51 crc kubenswrapper[4947]: W0125 00:11:51.042931 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06282146_8047_4104_b189_c896e5b7f8b9.slice/crio-b331dec527a60132595158dee76520a26cd144ddc3aa45e156eaf1db6341fcb3 WatchSource:0}: Error finding container b331dec527a60132595158dee76520a26cd144ddc3aa45e156eaf1db6341fcb3: Status 404 returned error can't find the container with id b331dec527a60132595158dee76520a26cd144ddc3aa45e156eaf1db6341fcb3 Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.069370 4947 patch_prober.go:28] interesting pod/apiserver-76f77b778f-7kcc9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]log ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]etcd ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/generic-apiserver-start-informers ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/max-in-flight-filter ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 25 00:11:51 crc kubenswrapper[4947]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 25 00:11:51 crc kubenswrapper[4947]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 25 00:11:51 crc kubenswrapper[4947]: 
[+]poststarthook/project.openshift.io-projectcache ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/openshift.io-startinformers ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 25 00:11:51 crc kubenswrapper[4947]: livez check failed Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.069793 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" podUID="4e8662e0-1de8-4371-8836-214a0394675c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.095694 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-catalog-content\") pod \"redhat-operators-4w46p\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.095752 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fvpv\" (UniqueName: \"kubernetes.io/projected/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-kube-api-access-9fvpv\") pod \"redhat-operators-4w46p\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.095823 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-utilities\") pod \"redhat-operators-4w46p\" (UID: 
\"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.119835 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.120553 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4w46p"] Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.144494 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.218912 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-utilities\") pod \"redhat-operators-4w46p\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.218961 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-catalog-content\") pod \"redhat-operators-4w46p\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.218994 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fvpv\" (UniqueName: \"kubernetes.io/projected/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-kube-api-access-9fvpv\") pod \"redhat-operators-4w46p\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.219734 4947 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-utilities\") pod \"redhat-operators-4w46p\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.219739 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-catalog-content\") pod \"redhat-operators-4w46p\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.256018 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fvpv\" (UniqueName: \"kubernetes.io/projected/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-kube-api-access-9fvpv\") pod \"redhat-operators-4w46p\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.390402 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.472583 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckt7"] Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.781984 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"67dda4f2-8e20-4173-8789-a53030fa141f","Type":"ContainerStarted","Data":"3e65d42033471ac4f20585645dea9aea697ba13cd4aaebab2dce1b8c26f58aa3"} Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.788871 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckt7" event={"ID":"47cb5005-6286-4d5c-b654-65009ac6d3d9","Type":"ContainerStarted","Data":"401542bfadee8c47bb521dc0eb21357c2ec3d46cc246c1c4b0d9b2d89d6fbbe2"} Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.823516 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ltw77"] Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.829663 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.829644248 podStartE2EDuration="3.829644248s" podCreationTimestamp="2026-01-25 00:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:51.828628402 +0000 UTC m=+151.061618842" watchObservedRunningTime="2026-01-25 00:11:51.829644248 +0000 UTC m=+151.062634688" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.834260 4947 generic.go:334] "Generic (PLEG): container finished" podID="06282146-8047-4104-b189-c896e5b7f8b9" containerID="b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c" exitCode=0 Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.834359 4947 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwwnp" event={"ID":"06282146-8047-4104-b189-c896e5b7f8b9","Type":"ContainerDied","Data":"b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c"} Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.834400 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwwnp" event={"ID":"06282146-8047-4104-b189-c896e5b7f8b9","Type":"ContainerStarted","Data":"b331dec527a60132595158dee76520a26cd144ddc3aa45e156eaf1db6341fcb3"} Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.845327 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" event={"ID":"ce1b6238-9a41-4472-accc-e4d7d6371357","Type":"ContainerStarted","Data":"51f2c364bfae060665da042ae7dc21f336f532bdfa00072378e4e19dbb303585"} Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.845372 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.845383 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" event={"ID":"ce1b6238-9a41-4472-accc-e4d7d6371357","Type":"ContainerStarted","Data":"8aa2ec1702299cb0f2f7ebe9da84ffc79ac7ec1919bcb49ddb3c081345236f17"} Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.883905 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" podStartSLOduration=130.883886433 podStartE2EDuration="2m10.883886433s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:51.883706228 +0000 UTC m=+151.116696668" watchObservedRunningTime="2026-01-25 00:11:51.883886433 +0000 UTC 
m=+151.116876873" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.893653 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:51 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:51 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:51 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.893726 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.074746 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4w46p"] Jan 25 00:11:52 crc kubenswrapper[4947]: W0125 00:11:52.087324 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57fceeaa_414d_4570_98fb_2b8a06a7d3bb.slice/crio-6ff6fe5b753697e94b7c8e5ddcb3516739a88d3938d849311b89961974eb03c2 WatchSource:0}: Error finding container 6ff6fe5b753697e94b7c8e5ddcb3516739a88d3938d849311b89961974eb03c2: Status 404 returned error can't find the container with id 6ff6fe5b753697e94b7c8e5ddcb3516739a88d3938d849311b89961974eb03c2 Jan 25 00:11:52 crc kubenswrapper[4947]: E0125 00:11:52.327386 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49263faf_29f4_481c_aafd_a271a29c209a.slice/crio-conmon-e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116.scope\": RecentStats: unable to find data in memory cache]" Jan 25 00:11:52 crc 
kubenswrapper[4947]: I0125 00:11:52.899858 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:52 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:52 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:52 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.899912 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.928149 4947 generic.go:334] "Generic (PLEG): container finished" podID="67dda4f2-8e20-4173-8789-a53030fa141f" containerID="3e65d42033471ac4f20585645dea9aea697ba13cd4aaebab2dce1b8c26f58aa3" exitCode=0 Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.928252 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"67dda4f2-8e20-4173-8789-a53030fa141f","Type":"ContainerDied","Data":"3e65d42033471ac4f20585645dea9aea697ba13cd4aaebab2dce1b8c26f58aa3"} Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.930683 4947 generic.go:334] "Generic (PLEG): container finished" podID="49263faf-29f4-481c-aafd-a271a29c209a" containerID="e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116" exitCode=0 Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.930751 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltw77" event={"ID":"49263faf-29f4-481c-aafd-a271a29c209a","Type":"ContainerDied","Data":"e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116"} Jan 25 00:11:52 crc 
kubenswrapper[4947]: I0125 00:11:52.930789 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltw77" event={"ID":"49263faf-29f4-481c-aafd-a271a29c209a","Type":"ContainerStarted","Data":"23655937ab043534ca01347d9a2964b60c41f2a6eae0705e6c094b13084701de"} Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.968239 4947 generic.go:334] "Generic (PLEG): container finished" podID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerID="2f94cab6a2a710126f5b6870bb3f746028f1b033d9975bad0560d251170fc46f" exitCode=0 Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.968329 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckt7" event={"ID":"47cb5005-6286-4d5c-b654-65009ac6d3d9","Type":"ContainerDied","Data":"2f94cab6a2a710126f5b6870bb3f746028f1b033d9975bad0560d251170fc46f"} Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.971915 4947 generic.go:334] "Generic (PLEG): container finished" podID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerID="5b659f6a3287d4d66a77d57b5ec03b9728a52bc6ef42a979eeaaf06156f4c4a0" exitCode=0 Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.972040 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4w46p" event={"ID":"57fceeaa-414d-4570-98fb-2b8a06a7d3bb","Type":"ContainerDied","Data":"5b659f6a3287d4d66a77d57b5ec03b9728a52bc6ef42a979eeaaf06156f4c4a0"} Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.972087 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4w46p" event={"ID":"57fceeaa-414d-4570-98fb-2b8a06a7d3bb","Type":"ContainerStarted","Data":"6ff6fe5b753697e94b7c8e5ddcb3516739a88d3938d849311b89961974eb03c2"} Jan 25 00:11:53 crc kubenswrapper[4947]: I0125 00:11:53.890227 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:53 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:53 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:53 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:53 crc kubenswrapper[4947]: I0125 00:11:53.890610 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.288534 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.293553 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.487462 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.563552 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67dda4f2-8e20-4173-8789-a53030fa141f-kubelet-dir\") pod \"67dda4f2-8e20-4173-8789-a53030fa141f\" (UID: \"67dda4f2-8e20-4173-8789-a53030fa141f\") " Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.563635 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67dda4f2-8e20-4173-8789-a53030fa141f-kube-api-access\") pod \"67dda4f2-8e20-4173-8789-a53030fa141f\" (UID: \"67dda4f2-8e20-4173-8789-a53030fa141f\") " Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.563679 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67dda4f2-8e20-4173-8789-a53030fa141f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "67dda4f2-8e20-4173-8789-a53030fa141f" (UID: "67dda4f2-8e20-4173-8789-a53030fa141f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.563933 4947 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67dda4f2-8e20-4173-8789-a53030fa141f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.588413 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67dda4f2-8e20-4173-8789-a53030fa141f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "67dda4f2-8e20-4173-8789-a53030fa141f" (UID: "67dda4f2-8e20-4173-8789-a53030fa141f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.665291 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67dda4f2-8e20-4173-8789-a53030fa141f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.851654 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 25 00:11:54 crc kubenswrapper[4947]: E0125 00:11:54.851931 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67dda4f2-8e20-4173-8789-a53030fa141f" containerName="pruner" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.851945 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="67dda4f2-8e20-4173-8789-a53030fa141f" containerName="pruner" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.852059 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="67dda4f2-8e20-4173-8789-a53030fa141f" containerName="pruner" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.852524 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.858663 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.859272 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.876939 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.892362 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:54 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:54 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:54 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.892417 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.968489 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4343388d-56d1-4c5b-a26d-f6e582b7818e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4343388d-56d1-4c5b-a26d-f6e582b7818e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.968584 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4343388d-56d1-4c5b-a26d-f6e582b7818e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4343388d-56d1-4c5b-a26d-f6e582b7818e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.998399 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"67dda4f2-8e20-4173-8789-a53030fa141f","Type":"ContainerDied","Data":"464af017ba47fe42ba33892a92a7d488e926c49ee9650ccb5665a6435ad230c8"} Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.998437 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="464af017ba47fe42ba33892a92a7d488e926c49ee9650ccb5665a6435ad230c8" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.998455 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:55 crc kubenswrapper[4947]: I0125 00:11:55.084712 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4343388d-56d1-4c5b-a26d-f6e582b7818e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4343388d-56d1-4c5b-a26d-f6e582b7818e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:11:55 crc kubenswrapper[4947]: I0125 00:11:55.084830 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4343388d-56d1-4c5b-a26d-f6e582b7818e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4343388d-56d1-4c5b-a26d-f6e582b7818e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:11:55 crc kubenswrapper[4947]: I0125 00:11:55.086419 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/4343388d-56d1-4c5b-a26d-f6e582b7818e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4343388d-56d1-4c5b-a26d-f6e582b7818e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:11:55 crc kubenswrapper[4947]: I0125 00:11:55.134667 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4343388d-56d1-4c5b-a26d-f6e582b7818e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4343388d-56d1-4c5b-a26d-f6e582b7818e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:11:55 crc kubenswrapper[4947]: I0125 00:11:55.171601 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:11:55 crc kubenswrapper[4947]: I0125 00:11:55.566094 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 25 00:11:55 crc kubenswrapper[4947]: I0125 00:11:55.606158 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:56 crc kubenswrapper[4947]: I0125 00:11:56.000279 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:56 crc kubenswrapper[4947]: I0125 00:11:56.007933 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:56 crc kubenswrapper[4947]: I0125 00:11:56.101585 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4343388d-56d1-4c5b-a26d-f6e582b7818e","Type":"ContainerStarted","Data":"7476936a1d8d55b81decfdc5bbb098f01c38f5aa4d9e5a87a5707598738da175"} Jan 25 00:11:57 crc kubenswrapper[4947]: I0125 00:11:57.143350 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4343388d-56d1-4c5b-a26d-f6e582b7818e","Type":"ContainerStarted","Data":"dc40331535091335870ab7cddd137b456fda7314c073ab1724841444245820ce"} Jan 25 00:11:57 crc kubenswrapper[4947]: I0125 00:11:57.145578 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.145567629 podStartE2EDuration="3.145567629s" podCreationTimestamp="2026-01-25 00:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:57.140552957 +0000 UTC m=+156.373543407" watchObservedRunningTime="2026-01-25 00:11:57.145567629 +0000 UTC m=+156.378558069" Jan 25 00:11:58 crc kubenswrapper[4947]: I0125 00:11:58.163850 4947 generic.go:334] "Generic (PLEG): container finished" podID="4343388d-56d1-4c5b-a26d-f6e582b7818e" containerID="dc40331535091335870ab7cddd137b456fda7314c073ab1724841444245820ce" exitCode=0 Jan 25 00:11:58 crc kubenswrapper[4947]: I0125 00:11:58.163893 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4343388d-56d1-4c5b-a26d-f6e582b7818e","Type":"ContainerDied","Data":"dc40331535091335870ab7cddd137b456fda7314c073ab1724841444245820ce"} Jan 25 00:11:58 crc kubenswrapper[4947]: I0125 00:11:58.935862 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:58 crc kubenswrapper[4947]: I0125 00:11:58.941251 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:12:00 crc kubenswrapper[4947]: I0125 00:12:00.165577 4947 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zvdg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 
10.217.0.17:8080: connect: connection refused" start-of-body= Jan 25 00:12:00 crc kubenswrapper[4947]: I0125 00:12:00.166218 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5zvdg" podUID="0e97ae5e-35ab-41e9-aa03-ad060bbbd676" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 25 00:12:00 crc kubenswrapper[4947]: I0125 00:12:00.169374 4947 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zvdg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 25 00:12:00 crc kubenswrapper[4947]: I0125 00:12:00.169454 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5zvdg" podUID="0e97ae5e-35ab-41e9-aa03-ad060bbbd676" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 25 00:12:04 crc kubenswrapper[4947]: I0125 00:12:04.810337 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:12:04 crc kubenswrapper[4947]: I0125 00:12:04.816603 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:12:04 crc kubenswrapper[4947]: I0125 00:12:04.912295 4947 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:12:05 crc kubenswrapper[4947]: I0125 00:12:05.617471 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nql4"] Jan 25 00:12:05 crc kubenswrapper[4947]: I0125 00:12:05.617711 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" podUID="9d15a018-8297-4e45-8a62-afa89a267381" containerName="controller-manager" containerID="cri-o://b8157486549c7153c6f440e775308e1931565402eac4d25b57d51cfdfab72be2" gracePeriod=30 Jan 25 00:12:05 crc kubenswrapper[4947]: I0125 00:12:05.630724 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z"] Jan 25 00:12:05 crc kubenswrapper[4947]: I0125 00:12:05.631016 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" containerName="route-controller-manager" containerID="cri-o://7403745f150d63c1345cf7e91fe3a3905bd0f9a92d392d4fa7f206d8e146d177" gracePeriod=30 Jan 25 00:12:06 crc kubenswrapper[4947]: I0125 00:12:06.264008 4947 generic.go:334] "Generic (PLEG): container finished" podID="37b7a00e-4def-4d1e-8333-94d15174223b" containerID="7403745f150d63c1345cf7e91fe3a3905bd0f9a92d392d4fa7f206d8e146d177" exitCode=0 Jan 25 00:12:06 crc kubenswrapper[4947]: I0125 00:12:06.264139 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" event={"ID":"37b7a00e-4def-4d1e-8333-94d15174223b","Type":"ContainerDied","Data":"7403745f150d63c1345cf7e91fe3a3905bd0f9a92d392d4fa7f206d8e146d177"} Jan 25 00:12:06 crc kubenswrapper[4947]: I0125 00:12:06.267373 4947 generic.go:334] "Generic (PLEG): 
container finished" podID="9d15a018-8297-4e45-8a62-afa89a267381" containerID="b8157486549c7153c6f440e775308e1931565402eac4d25b57d51cfdfab72be2" exitCode=0 Jan 25 00:12:06 crc kubenswrapper[4947]: I0125 00:12:06.267414 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" event={"ID":"9d15a018-8297-4e45-8a62-afa89a267381","Type":"ContainerDied","Data":"b8157486549c7153c6f440e775308e1931565402eac4d25b57d51cfdfab72be2"} Jan 25 00:12:07 crc kubenswrapper[4947]: I0125 00:12:07.827703 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:12:07 crc kubenswrapper[4947]: I0125 00:12:07.853310 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4343388d-56d1-4c5b-a26d-f6e582b7818e-kube-api-access\") pod \"4343388d-56d1-4c5b-a26d-f6e582b7818e\" (UID: \"4343388d-56d1-4c5b-a26d-f6e582b7818e\") " Jan 25 00:12:07 crc kubenswrapper[4947]: I0125 00:12:07.853402 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4343388d-56d1-4c5b-a26d-f6e582b7818e-kubelet-dir\") pod \"4343388d-56d1-4c5b-a26d-f6e582b7818e\" (UID: \"4343388d-56d1-4c5b-a26d-f6e582b7818e\") " Jan 25 00:12:07 crc kubenswrapper[4947]: I0125 00:12:07.853556 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4343388d-56d1-4c5b-a26d-f6e582b7818e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4343388d-56d1-4c5b-a26d-f6e582b7818e" (UID: "4343388d-56d1-4c5b-a26d-f6e582b7818e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:12:07 crc kubenswrapper[4947]: I0125 00:12:07.853842 4947 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4343388d-56d1-4c5b-a26d-f6e582b7818e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:07 crc kubenswrapper[4947]: I0125 00:12:07.859301 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4343388d-56d1-4c5b-a26d-f6e582b7818e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4343388d-56d1-4c5b-a26d-f6e582b7818e" (UID: "4343388d-56d1-4c5b-a26d-f6e582b7818e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:12:07 crc kubenswrapper[4947]: I0125 00:12:07.954535 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4343388d-56d1-4c5b-a26d-f6e582b7818e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:08 crc kubenswrapper[4947]: I0125 00:12:08.284708 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4343388d-56d1-4c5b-a26d-f6e582b7818e","Type":"ContainerDied","Data":"7476936a1d8d55b81decfdc5bbb098f01c38f5aa4d9e5a87a5707598738da175"} Jan 25 00:12:08 crc kubenswrapper[4947]: I0125 00:12:08.284763 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7476936a1d8d55b81decfdc5bbb098f01c38f5aa4d9e5a87a5707598738da175" Jan 25 00:12:08 crc kubenswrapper[4947]: I0125 00:12:08.284871 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:12:09 crc kubenswrapper[4947]: I0125 00:12:09.926598 4947 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vw66z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 25 00:12:09 crc kubenswrapper[4947]: I0125 00:12:09.927057 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 25 00:12:10 crc kubenswrapper[4947]: I0125 00:12:10.133090 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:12:10 crc kubenswrapper[4947]: I0125 00:12:10.180672 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-5zvdg" Jan 25 00:12:10 crc kubenswrapper[4947]: I0125 00:12:10.276295 4947 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5nql4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 25 00:12:10 crc kubenswrapper[4947]: I0125 00:12:10.276377 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" 
podUID="9d15a018-8297-4e45-8a62-afa89a267381" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 25 00:12:17 crc kubenswrapper[4947]: I0125 00:12:17.073038 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:12:17 crc kubenswrapper[4947]: I0125 00:12:17.073523 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:12:19 crc kubenswrapper[4947]: I0125 00:12:19.650463 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:12:19 crc kubenswrapper[4947]: I0125 00:12:19.927633 4947 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vw66z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 25 00:12:19 crc kubenswrapper[4947]: I0125 00:12:19.928237 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 25 00:12:20 crc kubenswrapper[4947]: I0125 00:12:20.277711 4947 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5nql4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 25 00:12:20 crc kubenswrapper[4947]: I0125 00:12:20.277786 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" podUID="9d15a018-8297-4e45-8a62-afa89a267381" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 25 00:12:24 crc kubenswrapper[4947]: I0125 00:12:24.387253 4947 generic.go:334] "Generic (PLEG): container finished" podID="9e4f53a6-fcc3-4310-965d-9a5dda91080b" containerID="e3aa396695721797a7d088335e8a4442951eaec070e1d3920c891d8532000ff2" exitCode=0 Jan 25 00:12:24 crc kubenswrapper[4947]: I0125 00:12:24.387453 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29488320-jf979" event={"ID":"9e4f53a6-fcc3-4310-965d-9a5dda91080b","Type":"ContainerDied","Data":"e3aa396695721797a7d088335e8a4442951eaec070e1d3920c891d8532000ff2"} Jan 25 00:12:28 crc kubenswrapper[4947]: I0125 00:12:28.248429 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:12:28 crc kubenswrapper[4947]: E0125 00:12:28.847875 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 25 00:12:28 crc kubenswrapper[4947]: E0125 00:12:28.848062 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxzk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qzj76_openshift-marketplace(900aeb01-050c-45b8-936c-e5f8d73ebeb5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
logger="UnhandledError" Jan 25 00:12:28 crc kubenswrapper[4947]: E0125 00:12:28.849311 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qzj76" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.883450 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qzj76" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.900987 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:eb2f8e43e60769562d45eeebcd027f8298b39dfa92aed456412d36a28c32bc92: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:eb2f8e43e60769562d45eeebcd027f8298b39dfa92aed456412d36a28c32bc92\": context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.901311 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fvpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4w46p_openshift-marketplace(57fceeaa-414d-4570-98fb-2b8a06a7d3bb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:eb2f8e43e60769562d45eeebcd027f8298b39dfa92aed456412d36a28c32bc92: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:eb2f8e43e60769562d45eeebcd027f8298b39dfa92aed456412d36a28c32bc92\": context canceled" logger="UnhandledError" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.902461 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: reading blob sha256:eb2f8e43e60769562d45eeebcd027f8298b39dfa92aed456412d36a28c32bc92: Get \\\"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:eb2f8e43e60769562d45eeebcd027f8298b39dfa92aed456412d36a28c32bc92\\\": context canceled\"" pod="openshift-marketplace/redhat-operators-4w46p" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.906234 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage381955160/2\": happened during read: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.906455 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xj6xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2ckt7_openshift-marketplace(47cb5005-6286-4d5c-b654-65009ac6d3d9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage381955160/2\": happened during read: context canceled" logger="UnhandledError" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.907796 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file 
\\\"/var/tmp/container_images_storage381955160/2\\\": happened during read: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2ckt7" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.927948 4947 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vw66z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.928020 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.960543 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.967770 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.984683 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.984901 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfhzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,
ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-nmrkd_openshift-marketplace(8631ec11-9ab2-4799-b57c-0a346ec69767): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.993789 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-nmrkd" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.994280 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t"] Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.994532 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" containerName="route-controller-manager" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.994556 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" containerName="route-controller-manager" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.994573 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d15a018-8297-4e45-8a62-afa89a267381" containerName="controller-manager" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.994579 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d15a018-8297-4e45-8a62-afa89a267381" containerName="controller-manager" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.994589 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4343388d-56d1-4c5b-a26d-f6e582b7818e" containerName="pruner" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.994596 
4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4343388d-56d1-4c5b-a26d-f6e582b7818e" containerName="pruner" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.994708 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" containerName="route-controller-manager" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.994719 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="4343388d-56d1-4c5b-a26d-f6e582b7818e" containerName="pruner" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.994728 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d15a018-8297-4e45-8a62-afa89a267381" containerName="controller-manager" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.995190 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.002937 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-config\") pod \"37b7a00e-4def-4d1e-8333-94d15174223b\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.003004 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x956c\" (UniqueName: \"kubernetes.io/projected/37b7a00e-4def-4d1e-8333-94d15174223b-kube-api-access-x956c\") pod \"37b7a00e-4def-4d1e-8333-94d15174223b\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.003026 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-client-ca\") pod \"37b7a00e-4def-4d1e-8333-94d15174223b\" (UID: 
\"37b7a00e-4def-4d1e-8333-94d15174223b\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.003090 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b7a00e-4def-4d1e-8333-94d15174223b-serving-cert\") pod \"37b7a00e-4def-4d1e-8333-94d15174223b\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.003728 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-config" (OuterVolumeSpecName: "config") pod "37b7a00e-4def-4d1e-8333-94d15174223b" (UID: "37b7a00e-4def-4d1e-8333-94d15174223b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.004229 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t"] Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.004470 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-client-ca" (OuterVolumeSpecName: "client-ca") pod "37b7a00e-4def-4d1e-8333-94d15174223b" (UID: "37b7a00e-4def-4d1e-8333-94d15174223b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.009412 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b7a00e-4def-4d1e-8333-94d15174223b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "37b7a00e-4def-4d1e-8333-94d15174223b" (UID: "37b7a00e-4def-4d1e-8333-94d15174223b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.023886 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b7a00e-4def-4d1e-8333-94d15174223b-kube-api-access-x956c" (OuterVolumeSpecName: "kube-api-access-x956c") pod "37b7a00e-4def-4d1e-8333-94d15174223b" (UID: "37b7a00e-4def-4d1e-8333-94d15174223b"). InnerVolumeSpecName "kube-api-access-x956c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.104764 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d15a018-8297-4e45-8a62-afa89a267381-serving-cert\") pod \"9d15a018-8297-4e45-8a62-afa89a267381\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.104863 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-config\") pod \"9d15a018-8297-4e45-8a62-afa89a267381\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105031 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-proxy-ca-bundles\") pod \"9d15a018-8297-4e45-8a62-afa89a267381\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105053 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsx9g\" (UniqueName: \"kubernetes.io/projected/9d15a018-8297-4e45-8a62-afa89a267381-kube-api-access-zsx9g\") pod \"9d15a018-8297-4e45-8a62-afa89a267381\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105084 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-client-ca\") pod \"9d15a018-8297-4e45-8a62-afa89a267381\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105402 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-client-ca\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105454 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-serving-cert\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105528 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nrzk\" (UniqueName: \"kubernetes.io/projected/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-kube-api-access-8nrzk\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105602 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-config\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: 
\"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105685 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105701 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x956c\" (UniqueName: \"kubernetes.io/projected/37b7a00e-4def-4d1e-8333-94d15174223b-kube-api-access-x956c\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105718 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105735 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b7a00e-4def-4d1e-8333-94d15174223b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.106352 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-client-ca" (OuterVolumeSpecName: "client-ca") pod "9d15a018-8297-4e45-8a62-afa89a267381" (UID: "9d15a018-8297-4e45-8a62-afa89a267381"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.106365 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9d15a018-8297-4e45-8a62-afa89a267381" (UID: "9d15a018-8297-4e45-8a62-afa89a267381"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.106596 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-config" (OuterVolumeSpecName: "config") pod "9d15a018-8297-4e45-8a62-afa89a267381" (UID: "9d15a018-8297-4e45-8a62-afa89a267381"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.108435 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d15a018-8297-4e45-8a62-afa89a267381-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d15a018-8297-4e45-8a62-afa89a267381" (UID: "9d15a018-8297-4e45-8a62-afa89a267381"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.108498 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d15a018-8297-4e45-8a62-afa89a267381-kube-api-access-zsx9g" (OuterVolumeSpecName: "kube-api-access-zsx9g") pod "9d15a018-8297-4e45-8a62-afa89a267381" (UID: "9d15a018-8297-4e45-8a62-afa89a267381"). InnerVolumeSpecName "kube-api-access-zsx9g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.206727 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-config\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.206820 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-client-ca\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.206857 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-serving-cert\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.206937 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nrzk\" (UniqueName: \"kubernetes.io/projected/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-kube-api-access-8nrzk\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.207003 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsx9g\" (UniqueName: 
\"kubernetes.io/projected/9d15a018-8297-4e45-8a62-afa89a267381-kube-api-access-zsx9g\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.207024 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.207042 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.207060 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d15a018-8297-4e45-8a62-afa89a267381-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.207077 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.208096 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-config\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.208800 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-client-ca\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " 
pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.213711 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-serving-cert\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.227086 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nrzk\" (UniqueName: \"kubernetes.io/projected/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-kube-api-access-8nrzk\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.274655 4947 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5nql4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": context deadline exceeded" start-of-body= Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.274721 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" podUID="9d15a018-8297-4e45-8a62-afa89a267381" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": context deadline exceeded" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.345857 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.429962 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" event={"ID":"37b7a00e-4def-4d1e-8333-94d15174223b","Type":"ContainerDied","Data":"f071273158cef43a1913f83b87e712c39d955ed385651512fb2fd76cd5e1e89d"} Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.429997 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.430091 4947 scope.go:117] "RemoveContainer" containerID="7403745f150d63c1345cf7e91fe3a3905bd0f9a92d392d4fa7f206d8e146d177" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.436870 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.437031 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" event={"ID":"9d15a018-8297-4e45-8a62-afa89a267381","Type":"ContainerDied","Data":"f8461de37c95d2f68aceb775dc7fe3de76c76f12ffef68dfcf5d69ee41e8f1e4"} Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.453559 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.455539 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.461507 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.461998 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.473099 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.515397 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.516003 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.556769 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nql4"] Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.564029 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nql4"] Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.587813 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z"] Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.590481 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z"] Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.617763 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.617851 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.617952 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.634608 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.802981 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:31 crc kubenswrapper[4947]: I0125 00:12:31.106892 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" path="/var/lib/kubelet/pods/37b7a00e-4def-4d1e-8333-94d15174223b/volumes" Jan 25 00:12:31 crc kubenswrapper[4947]: I0125 00:12:31.108385 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d15a018-8297-4e45-8a62-afa89a267381" path="/var/lib/kubelet/pods/9d15a018-8297-4e45-8a62-afa89a267381/volumes" Jan 25 00:12:31 crc kubenswrapper[4947]: E0125 00:12:31.563186 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 25 00:12:31 crc kubenswrapper[4947]: E0125 00:12:31.563362 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ndnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-ltw77_openshift-marketplace(49263faf-29f4-481c-aafd-a271a29c209a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 25 00:12:31 crc kubenswrapper[4947]: E0125 00:12:31.564543 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ltw77" podUID="49263faf-29f4-481c-aafd-a271a29c209a" Jan 25 00:12:32 crc 
kubenswrapper[4947]: I0125 00:12:32.168925 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5687556c4c-8vd78"] Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.169809 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.173142 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.175523 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.176391 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.176218 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.177843 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.180807 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.181212 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5687556c4c-8vd78"] Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.185578 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.238902 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6fvp\" (UniqueName: \"kubernetes.io/projected/494402be-6a25-4b8d-a515-de9eba8f1d31-kube-api-access-w6fvp\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.239051 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-client-ca\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.239097 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-config\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.239187 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-proxy-ca-bundles\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.239243 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/494402be-6a25-4b8d-a515-de9eba8f1d31-serving-cert\") pod \"controller-manager-5687556c4c-8vd78\" (UID: 
\"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.340455 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6fvp\" (UniqueName: \"kubernetes.io/projected/494402be-6a25-4b8d-a515-de9eba8f1d31-kube-api-access-w6fvp\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.340513 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-client-ca\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.340532 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-config\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.340556 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-proxy-ca-bundles\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.341648 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-proxy-ca-bundles\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.341756 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-config\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.341844 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/494402be-6a25-4b8d-a515-de9eba8f1d31-serving-cert\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.341904 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-client-ca\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.345110 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/494402be-6a25-4b8d-a515-de9eba8f1d31-serving-cert\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.355337 4947 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-w6fvp\" (UniqueName: \"kubernetes.io/projected/494402be-6a25-4b8d-a515-de9eba8f1d31-kube-api-access-w6fvp\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.507706 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:35 crc kubenswrapper[4947]: I0125 00:12:35.845340 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 25 00:12:35 crc kubenswrapper[4947]: I0125 00:12:35.846805 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:35 crc kubenswrapper[4947]: I0125 00:12:35.855318 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 25 00:12:35 crc kubenswrapper[4947]: E0125 00:12:35.991191 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ltw77" podUID="49263faf-29f4-481c-aafd-a271a29c209a" Jan 25 00:12:35 crc kubenswrapper[4947]: E0125 00:12:35.991447 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-nmrkd" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" Jan 25 00:12:35 crc kubenswrapper[4947]: E0125 00:12:35.991518 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4w46p" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" Jan 25 00:12:35 crc kubenswrapper[4947]: E0125 00:12:35.991633 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2ckt7" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" Jan 25 00:12:35 crc kubenswrapper[4947]: I0125 00:12:35.996250 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-var-lock\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:35 crc kubenswrapper[4947]: I0125 00:12:35.996344 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:35 crc kubenswrapper[4947]: I0125 00:12:35.996449 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3305e0ba-7064-415c-bbaa-bdc630d95e40-kube-api-access\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:36 crc kubenswrapper[4947]: E0125 00:12:36.084935 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 25 00:12:36 crc kubenswrapper[4947]: E0125 00:12:36.085638 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lszc2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4vzx6_openshift-marketplace(4fbe2fc7-f0a5-439c-988c-d034d3da6add): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" 
Jan 25 00:12:36 crc kubenswrapper[4947]: E0125 00:12:36.086875 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4vzx6" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" Jan 25 00:12:36 crc kubenswrapper[4947]: I0125 00:12:36.097852 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-var-lock\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:36 crc kubenswrapper[4947]: I0125 00:12:36.097926 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:36 crc kubenswrapper[4947]: I0125 00:12:36.098001 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3305e0ba-7064-415c-bbaa-bdc630d95e40-kube-api-access\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:36 crc kubenswrapper[4947]: I0125 00:12:36.098506 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-var-lock\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:36 crc kubenswrapper[4947]: I0125 00:12:36.098569 4947 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:36 crc kubenswrapper[4947]: I0125 00:12:36.121974 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3305e0ba-7064-415c-bbaa-bdc630d95e40-kube-api-access\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:36 crc kubenswrapper[4947]: I0125 00:12:36.180549 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:37 crc kubenswrapper[4947]: E0125 00:12:37.450106 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4vzx6" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.479338 4947 scope.go:117] "RemoveContainer" containerID="b8157486549c7153c6f440e775308e1931565402eac4d25b57d51cfdfab72be2" Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.480622 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29488320-jf979" event={"ID":"9e4f53a6-fcc3-4310-965d-9a5dda91080b","Type":"ContainerDied","Data":"3ad4af98bee02e97732bce66ef98812c3ca3ae679230f9721b2db8e405f6defc"} Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.480661 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ad4af98bee02e97732bce66ef98812c3ca3ae679230f9721b2db8e405f6defc" Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.505256 4947 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:12:37 crc kubenswrapper[4947]: E0125 00:12:37.514871 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 25 00:12:37 crc kubenswrapper[4947]: E0125 00:12:37.515033 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8th5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeD
evices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wwwnp_openshift-marketplace(06282146-8047-4104-b189-c896e5b7f8b9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 25 00:12:37 crc kubenswrapper[4947]: E0125 00:12:37.518844 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wwwnp" podUID="06282146-8047-4104-b189-c896e5b7f8b9" Jan 25 00:12:37 crc kubenswrapper[4947]: E0125 00:12:37.562058 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 25 00:12:37 crc kubenswrapper[4947]: E0125 00:12:37.562545 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhdwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-47m2l_openshift-marketplace(ad96bcad-395b-4844-9992-00acdf7436c2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 25 00:12:37 crc kubenswrapper[4947]: E0125 00:12:37.563888 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-47m2l" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" Jan 25 00:12:37 crc 
kubenswrapper[4947]: I0125 00:12:37.635883 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e4f53a6-fcc3-4310-965d-9a5dda91080b-serviceca\") pod \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\" (UID: \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\") " Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.635931 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6msz\" (UniqueName: \"kubernetes.io/projected/9e4f53a6-fcc3-4310-965d-9a5dda91080b-kube-api-access-k6msz\") pod \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\" (UID: \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\") " Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.637174 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e4f53a6-fcc3-4310-965d-9a5dda91080b-serviceca" (OuterVolumeSpecName: "serviceca") pod "9e4f53a6-fcc3-4310-965d-9a5dda91080b" (UID: "9e4f53a6-fcc3-4310-965d-9a5dda91080b"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.650031 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e4f53a6-fcc3-4310-965d-9a5dda91080b-kube-api-access-k6msz" (OuterVolumeSpecName: "kube-api-access-k6msz") pod "9e4f53a6-fcc3-4310-965d-9a5dda91080b" (UID: "9e4f53a6-fcc3-4310-965d-9a5dda91080b"). InnerVolumeSpecName "kube-api-access-k6msz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.738701 4947 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e4f53a6-fcc3-4310-965d-9a5dda91080b-serviceca\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.738737 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6msz\" (UniqueName: \"kubernetes.io/projected/9e4f53a6-fcc3-4310-965d-9a5dda91080b-kube-api-access-k6msz\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.896259 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hj7kb"] Jan 25 00:12:37 crc kubenswrapper[4947]: W0125 00:12:37.900905 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a64fbf1_68fc_4379_9bb7_009c4f2cc812.slice/crio-e1697cdf50157c54be0fee186f9fa041d65aada475b62821b2409ec373f611b6 WatchSource:0}: Error finding container e1697cdf50157c54be0fee186f9fa041d65aada475b62821b2409ec373f611b6: Status 404 returned error can't find the container with id e1697cdf50157c54be0fee186f9fa041d65aada475b62821b2409ec373f611b6 Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.004570 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.010961 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5687556c4c-8vd78"] Jan 25 00:12:38 crc kubenswrapper[4947]: W0125 00:12:38.015408 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod94d05abe_f768_43d7_abf4_0a7a4e36c37e.slice/crio-0d6738675ab309ecb5f97f7f0380be43c80274292c8c75612a5f1dcb0f6499cb WatchSource:0}: Error finding container 
0d6738675ab309ecb5f97f7f0380be43c80274292c8c75612a5f1dcb0f6499cb: Status 404 returned error can't find the container with id 0d6738675ab309ecb5f97f7f0380be43c80274292c8c75612a5f1dcb0f6499cb Jan 25 00:12:38 crc kubenswrapper[4947]: W0125 00:12:38.015726 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod494402be_6a25_4b8d_a515_de9eba8f1d31.slice/crio-4c061a35ba34d1fb75aea276081e1c5742b11fc541423a49284514c05ed48d3b WatchSource:0}: Error finding container 4c061a35ba34d1fb75aea276081e1c5742b11fc541423a49284514c05ed48d3b: Status 404 returned error can't find the container with id 4c061a35ba34d1fb75aea276081e1c5742b11fc541423a49284514c05ed48d3b Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.059702 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.067250 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t"] Jan 25 00:12:38 crc kubenswrapper[4947]: W0125 00:12:38.068295 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3305e0ba_7064_415c_bbaa_bdc630d95e40.slice/crio-c860d5f033f7c7c376e626eb2cc279e42901878e52850a69646c280979b6502c WatchSource:0}: Error finding container c860d5f033f7c7c376e626eb2cc279e42901878e52850a69646c280979b6502c: Status 404 returned error can't find the container with id c860d5f033f7c7c376e626eb2cc279e42901878e52850a69646c280979b6502c Jan 25 00:12:38 crc kubenswrapper[4947]: W0125 00:12:38.084478 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1842ab3_9eb3_4aa3_b77f_ee74e120fe47.slice/crio-7fa469b1f93c6d5d71121e93f8cc7b379adfec6c3fbf576d48d60d1ba1c8315e WatchSource:0}: Error finding container 
7fa469b1f93c6d5d71121e93f8cc7b379adfec6c3fbf576d48d60d1ba1c8315e: Status 404 returned error can't find the container with id 7fa469b1f93c6d5d71121e93f8cc7b379adfec6c3fbf576d48d60d1ba1c8315e Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.487327 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" event={"ID":"494402be-6a25-4b8d-a515-de9eba8f1d31","Type":"ContainerStarted","Data":"1c028832216129582df30e9ae1e01392e885f0edd17b2831b6b48bd47fc1c5c0"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.487723 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.487734 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" event={"ID":"494402be-6a25-4b8d-a515-de9eba8f1d31","Type":"ContainerStarted","Data":"4c061a35ba34d1fb75aea276081e1c5742b11fc541423a49284514c05ed48d3b"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.491613 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" event={"ID":"9a64fbf1-68fc-4379-9bb7-009c4f2cc812","Type":"ContainerStarted","Data":"6cf2f71bea8dc343fff30434b9c7ab220a8da9ed448baca807433dff49bfd524"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.491646 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" event={"ID":"9a64fbf1-68fc-4379-9bb7-009c4f2cc812","Type":"ContainerStarted","Data":"a4e20ea7aa6163a8d9dd3f1dd01209a3a318ee65a41e958e76b459a269b88398"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.491658 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" 
event={"ID":"9a64fbf1-68fc-4379-9bb7-009c4f2cc812","Type":"ContainerStarted","Data":"e1697cdf50157c54be0fee186f9fa041d65aada475b62821b2409ec373f611b6"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.493053 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.493407 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" event={"ID":"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47","Type":"ContainerStarted","Data":"6c1d7cc945b820b218fa75e06214182e9d3f500a0874845bdee9db322e9cef77"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.493425 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" event={"ID":"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47","Type":"ContainerStarted","Data":"7fa469b1f93c6d5d71121e93f8cc7b379adfec6c3fbf576d48d60d1ba1c8315e"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.493931 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.495696 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3305e0ba-7064-415c-bbaa-bdc630d95e40","Type":"ContainerStarted","Data":"34c655a70626cb0470c8341f4426a959f7be73c9bd302d5c7f42a0999b60f186"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.495721 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3305e0ba-7064-415c-bbaa-bdc630d95e40","Type":"ContainerStarted","Data":"c860d5f033f7c7c376e626eb2cc279e42901878e52850a69646c280979b6502c"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.499506 4947 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"94d05abe-f768-43d7-abf4-0a7a4e36c37e","Type":"ContainerStarted","Data":"3be61bec2372426fb30ddc693384b0919273401e3137f16957f1612fd7428fda"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.499532 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"94d05abe-f768-43d7-abf4-0a7a4e36c37e","Type":"ContainerStarted","Data":"0d6738675ab309ecb5f97f7f0380be43c80274292c8c75612a5f1dcb0f6499cb"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.499579 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:12:38 crc kubenswrapper[4947]: E0125 00:12:38.503161 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-47m2l" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" Jan 25 00:12:38 crc kubenswrapper[4947]: E0125 00:12:38.504486 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wwwnp" podUID="06282146-8047-4104-b189-c896e5b7f8b9" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.507206 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" podStartSLOduration=13.50718859 podStartE2EDuration="13.50718859s" podCreationTimestamp="2026-01-25 00:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-25 00:12:38.503207176 +0000 UTC m=+197.736197606" watchObservedRunningTime="2026-01-25 00:12:38.50718859 +0000 UTC m=+197.740179030" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.552692 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" podStartSLOduration=13.552671626 podStartE2EDuration="13.552671626s" podCreationTimestamp="2026-01-25 00:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:12:38.550344754 +0000 UTC m=+197.783335204" watchObservedRunningTime="2026-01-25 00:12:38.552671626 +0000 UTC m=+197.785662066" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.573702 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.5736870080000003 podStartE2EDuration="3.573687008s" podCreationTimestamp="2026-01-25 00:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:12:38.57077025 +0000 UTC m=+197.803760690" watchObservedRunningTime="2026-01-25 00:12:38.573687008 +0000 UTC m=+197.806677448" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.610708 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hj7kb" podStartSLOduration=177.61069034 podStartE2EDuration="2m57.61069034s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:12:38.587190002 +0000 UTC m=+197.820180442" watchObservedRunningTime="2026-01-25 00:12:38.61069034 +0000 UTC m=+197.843680780" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.649672 4947 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=8.649650823 podStartE2EDuration="8.649650823s" podCreationTimestamp="2026-01-25 00:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:12:38.642472535 +0000 UTC m=+197.875462975" watchObservedRunningTime="2026-01-25 00:12:38.649650823 +0000 UTC m=+197.882641263" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.882223 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:39 crc kubenswrapper[4947]: I0125 00:12:39.504845 4947 generic.go:334] "Generic (PLEG): container finished" podID="94d05abe-f768-43d7-abf4-0a7a4e36c37e" containerID="3be61bec2372426fb30ddc693384b0919273401e3137f16957f1612fd7428fda" exitCode=0 Jan 25 00:12:39 crc kubenswrapper[4947]: I0125 00:12:39.504954 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"94d05abe-f768-43d7-abf4-0a7a4e36c37e","Type":"ContainerDied","Data":"3be61bec2372426fb30ddc693384b0919273401e3137f16957f1612fd7428fda"} Jan 25 00:12:40 crc kubenswrapper[4947]: I0125 00:12:40.843383 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:40 crc kubenswrapper[4947]: I0125 00:12:40.980537 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kubelet-dir\") pod \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\" (UID: \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\") " Jan 25 00:12:40 crc kubenswrapper[4947]: I0125 00:12:40.980605 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "94d05abe-f768-43d7-abf4-0a7a4e36c37e" (UID: "94d05abe-f768-43d7-abf4-0a7a4e36c37e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:12:40 crc kubenswrapper[4947]: I0125 00:12:40.980676 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kube-api-access\") pod \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\" (UID: \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\") " Jan 25 00:12:40 crc kubenswrapper[4947]: I0125 00:12:40.980949 4947 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:40 crc kubenswrapper[4947]: I0125 00:12:40.990474 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "94d05abe-f768-43d7-abf4-0a7a4e36c37e" (UID: "94d05abe-f768-43d7-abf4-0a7a4e36c37e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:12:41 crc kubenswrapper[4947]: I0125 00:12:41.082058 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:41 crc kubenswrapper[4947]: I0125 00:12:41.543824 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"94d05abe-f768-43d7-abf4-0a7a4e36c37e","Type":"ContainerDied","Data":"0d6738675ab309ecb5f97f7f0380be43c80274292c8c75612a5f1dcb0f6499cb"} Jan 25 00:12:41 crc kubenswrapper[4947]: I0125 00:12:41.544429 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:41 crc kubenswrapper[4947]: I0125 00:12:41.546173 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d6738675ab309ecb5f97f7f0380be43c80274292c8c75612a5f1dcb0f6499cb" Jan 25 00:12:42 crc kubenswrapper[4947]: I0125 00:12:42.550631 4947 generic.go:334] "Generic (PLEG): container finished" podID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerID="43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b" exitCode=0 Jan 25 00:12:42 crc kubenswrapper[4947]: I0125 00:12:42.550739 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzj76" event={"ID":"900aeb01-050c-45b8-936c-e5f8d73ebeb5","Type":"ContainerDied","Data":"43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b"} Jan 25 00:12:43 crc kubenswrapper[4947]: I0125 00:12:43.558763 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzj76" event={"ID":"900aeb01-050c-45b8-936c-e5f8d73ebeb5","Type":"ContainerStarted","Data":"625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1"} Jan 25 00:12:43 crc 
kubenswrapper[4947]: I0125 00:12:43.577345 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qzj76" podStartSLOduration=4.243234018 podStartE2EDuration="56.577329434s" podCreationTimestamp="2026-01-25 00:11:47 +0000 UTC" firstStartedPulling="2026-01-25 00:11:50.742758844 +0000 UTC m=+149.975749285" lastFinishedPulling="2026-01-25 00:12:43.076854261 +0000 UTC m=+202.309844701" observedRunningTime="2026-01-25 00:12:43.576413669 +0000 UTC m=+202.809404149" watchObservedRunningTime="2026-01-25 00:12:43.577329434 +0000 UTC m=+202.810319874" Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.072571 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.073049 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.073120 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.073866 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" 
Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.074033 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" containerID="cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc" gracePeriod=600 Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.581703 4947 generic.go:334] "Generic (PLEG): container finished" podID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerID="f0e8d23d69c130c02800100eadb68b2223ac406b1ec604b6d679944a469596fc" exitCode=0 Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.581769 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckt7" event={"ID":"47cb5005-6286-4d5c-b654-65009ac6d3d9","Type":"ContainerDied","Data":"f0e8d23d69c130c02800100eadb68b2223ac406b1ec604b6d679944a469596fc"} Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.583302 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc" exitCode=0 Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.583344 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc"} Jan 25 00:12:48 crc kubenswrapper[4947]: I0125 00:12:48.413331 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:12:48 crc kubenswrapper[4947]: I0125 00:12:48.413707 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:12:48 crc kubenswrapper[4947]: I0125 
00:12:48.571337 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:12:48 crc kubenswrapper[4947]: I0125 00:12:48.605091 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"6a30b7c64683957a95e052711b4800e7471651a4fd592c025524ce839a16c59c"} Jan 25 00:12:48 crc kubenswrapper[4947]: I0125 00:12:48.652502 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:12:49 crc kubenswrapper[4947]: I0125 00:12:49.612806 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltw77" event={"ID":"49263faf-29f4-481c-aafd-a271a29c209a","Type":"ContainerStarted","Data":"4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16"} Jan 25 00:12:49 crc kubenswrapper[4947]: I0125 00:12:49.615380 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckt7" event={"ID":"47cb5005-6286-4d5c-b654-65009ac6d3d9","Type":"ContainerStarted","Data":"317126b2d6e4f76d49bc8f58d4a6611c35dbaae31512dbd0e9d9b571103fb945"} Jan 25 00:12:49 crc kubenswrapper[4947]: I0125 00:12:49.650423 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2ckt7" podStartSLOduration=4.049122811 podStartE2EDuration="59.650403988s" podCreationTimestamp="2026-01-25 00:11:50 +0000 UTC" firstStartedPulling="2026-01-25 00:11:52.969887203 +0000 UTC m=+152.202877643" lastFinishedPulling="2026-01-25 00:12:48.57116838 +0000 UTC m=+207.804158820" observedRunningTime="2026-01-25 00:12:49.649330979 +0000 UTC m=+208.882321439" watchObservedRunningTime="2026-01-25 00:12:49.650403988 +0000 UTC m=+208.883394428" Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 
00:12:50.624285 4947 generic.go:334] "Generic (PLEG): container finished" podID="49263faf-29f4-481c-aafd-a271a29c209a" containerID="4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16" exitCode=0 Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 00:12:50.624525 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltw77" event={"ID":"49263faf-29f4-481c-aafd-a271a29c209a","Type":"ContainerDied","Data":"4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16"} Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 00:12:50.632929 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4w46p" event={"ID":"57fceeaa-414d-4570-98fb-2b8a06a7d3bb","Type":"ContainerDied","Data":"7d1970bbd2b42fe7877661c52dbc7fbf441aa3e65a5ef8dfe46f639caa2e9c08"} Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 00:12:50.633165 4947 generic.go:334] "Generic (PLEG): container finished" podID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerID="7d1970bbd2b42fe7877661c52dbc7fbf441aa3e65a5ef8dfe46f639caa2e9c08" exitCode=0 Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 00:12:50.635344 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 00:12:50.635407 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 00:12:50.636000 4947 generic.go:334] "Generic (PLEG): container finished" podID="06282146-8047-4104-b189-c896e5b7f8b9" containerID="9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e" exitCode=0 Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 00:12:50.636663 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwwnp" 
event={"ID":"06282146-8047-4104-b189-c896e5b7f8b9","Type":"ContainerDied","Data":"9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e"} Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 00:12:50.698036 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:12:52 crc kubenswrapper[4947]: I0125 00:12:52.649247 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwwnp" event={"ID":"06282146-8047-4104-b189-c896e5b7f8b9","Type":"ContainerStarted","Data":"8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b"} Jan 25 00:12:52 crc kubenswrapper[4947]: I0125 00:12:52.667371 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wwwnp" podStartSLOduration=3.668193039 podStartE2EDuration="1m3.667351589s" podCreationTimestamp="2026-01-25 00:11:49 +0000 UTC" firstStartedPulling="2026-01-25 00:11:51.840334099 +0000 UTC m=+151.073324539" lastFinishedPulling="2026-01-25 00:12:51.839492659 +0000 UTC m=+211.072483089" observedRunningTime="2026-01-25 00:12:52.665142709 +0000 UTC m=+211.898133159" watchObservedRunningTime="2026-01-25 00:12:52.667351589 +0000 UTC m=+211.900342029" Jan 25 00:12:53 crc kubenswrapper[4947]: I0125 00:12:53.656912 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4w46p" event={"ID":"57fceeaa-414d-4570-98fb-2b8a06a7d3bb","Type":"ContainerStarted","Data":"cda1828998251daaa0955fcd8ff8f0139c55af4883ef564ee55bdf29defa9273"} Jan 25 00:12:54 crc kubenswrapper[4947]: I0125 00:12:54.664720 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vzx6" event={"ID":"4fbe2fc7-f0a5-439c-988c-d034d3da6add","Type":"ContainerStarted","Data":"df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985"} Jan 25 00:12:54 crc kubenswrapper[4947]: I0125 
00:12:54.667232 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltw77" event={"ID":"49263faf-29f4-481c-aafd-a271a29c209a","Type":"ContainerStarted","Data":"482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3"} Jan 25 00:12:54 crc kubenswrapper[4947]: I0125 00:12:54.669413 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47m2l" event={"ID":"ad96bcad-395b-4844-9992-00acdf7436c2","Type":"ContainerStarted","Data":"f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7"} Jan 25 00:12:54 crc kubenswrapper[4947]: I0125 00:12:54.671297 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmrkd" event={"ID":"8631ec11-9ab2-4799-b57c-0a346ec69767","Type":"ContainerStarted","Data":"5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147"} Jan 25 00:12:54 crc kubenswrapper[4947]: I0125 00:12:54.754835 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4w46p" podStartSLOduration=5.414349168 podStartE2EDuration="1m4.754814415s" podCreationTimestamp="2026-01-25 00:11:50 +0000 UTC" firstStartedPulling="2026-01-25 00:11:52.973012525 +0000 UTC m=+152.206002965" lastFinishedPulling="2026-01-25 00:12:52.313477782 +0000 UTC m=+211.546468212" observedRunningTime="2026-01-25 00:12:54.751484484 +0000 UTC m=+213.984474924" watchObservedRunningTime="2026-01-25 00:12:54.754814415 +0000 UTC m=+213.987804855" Jan 25 00:12:54 crc kubenswrapper[4947]: I0125 00:12:54.779810 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ltw77" podStartSLOduration=4.993785403 podStartE2EDuration="1m4.779789705s" podCreationTimestamp="2026-01-25 00:11:50 +0000 UTC" firstStartedPulling="2026-01-25 00:11:52.933040104 +0000 UTC m=+152.166030544" lastFinishedPulling="2026-01-25 00:12:52.719044406 +0000 
UTC m=+211.952034846" observedRunningTime="2026-01-25 00:12:54.775963819 +0000 UTC m=+214.008954259" watchObservedRunningTime="2026-01-25 00:12:54.779789705 +0000 UTC m=+214.012780155" Jan 25 00:12:55 crc kubenswrapper[4947]: I0125 00:12:55.677614 4947 generic.go:334] "Generic (PLEG): container finished" podID="ad96bcad-395b-4844-9992-00acdf7436c2" containerID="f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7" exitCode=0 Jan 25 00:12:55 crc kubenswrapper[4947]: I0125 00:12:55.677691 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47m2l" event={"ID":"ad96bcad-395b-4844-9992-00acdf7436c2","Type":"ContainerDied","Data":"f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7"} Jan 25 00:12:55 crc kubenswrapper[4947]: I0125 00:12:55.680712 4947 generic.go:334] "Generic (PLEG): container finished" podID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerID="5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147" exitCode=0 Jan 25 00:12:55 crc kubenswrapper[4947]: I0125 00:12:55.680780 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmrkd" event={"ID":"8631ec11-9ab2-4799-b57c-0a346ec69767","Type":"ContainerDied","Data":"5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147"} Jan 25 00:12:55 crc kubenswrapper[4947]: I0125 00:12:55.686361 4947 generic.go:334] "Generic (PLEG): container finished" podID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerID="df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985" exitCode=0 Jan 25 00:12:55 crc kubenswrapper[4947]: I0125 00:12:55.686396 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vzx6" event={"ID":"4fbe2fc7-f0a5-439c-988c-d034d3da6add","Type":"ContainerDied","Data":"df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985"} Jan 25 00:12:59 crc kubenswrapper[4947]: I0125 00:12:59.709923 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47m2l" event={"ID":"ad96bcad-395b-4844-9992-00acdf7436c2","Type":"ContainerStarted","Data":"d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289"} Jan 25 00:12:59 crc kubenswrapper[4947]: I0125 00:12:59.714519 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmrkd" event={"ID":"8631ec11-9ab2-4799-b57c-0a346ec69767","Type":"ContainerStarted","Data":"3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686"} Jan 25 00:12:59 crc kubenswrapper[4947]: I0125 00:12:59.718119 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vzx6" event={"ID":"4fbe2fc7-f0a5-439c-988c-d034d3da6add","Type":"ContainerStarted","Data":"22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14"} Jan 25 00:12:59 crc kubenswrapper[4947]: I0125 00:12:59.740968 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-47m2l" podStartSLOduration=5.510123453 podStartE2EDuration="1m12.740941888s" podCreationTimestamp="2026-01-25 00:11:47 +0000 UTC" firstStartedPulling="2026-01-25 00:11:50.677154621 +0000 UTC m=+149.910145061" lastFinishedPulling="2026-01-25 00:12:57.907973046 +0000 UTC m=+217.140963496" observedRunningTime="2026-01-25 00:12:59.735852858 +0000 UTC m=+218.968843298" watchObservedRunningTime="2026-01-25 00:12:59.740941888 +0000 UTC m=+218.973932318" Jan 25 00:12:59 crc kubenswrapper[4947]: I0125 00:12:59.758877 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4vzx6" podStartSLOduration=3.610270666 podStartE2EDuration="1m11.758850663s" podCreationTimestamp="2026-01-25 00:11:48 +0000 UTC" firstStartedPulling="2026-01-25 00:11:50.730790781 +0000 UTC m=+149.963781221" lastFinishedPulling="2026-01-25 00:12:58.879370778 +0000 UTC 
m=+218.112361218" observedRunningTime="2026-01-25 00:12:59.757362281 +0000 UTC m=+218.990352741" watchObservedRunningTime="2026-01-25 00:12:59.758850663 +0000 UTC m=+218.991841103" Jan 25 00:12:59 crc kubenswrapper[4947]: I0125 00:12:59.786764 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nmrkd" podStartSLOduration=3.693786688 podStartE2EDuration="1m11.786724972s" podCreationTimestamp="2026-01-25 00:11:48 +0000 UTC" firstStartedPulling="2026-01-25 00:11:50.717220924 +0000 UTC m=+149.950211354" lastFinishedPulling="2026-01-25 00:12:58.810159198 +0000 UTC m=+218.043149638" observedRunningTime="2026-01-25 00:12:59.784746428 +0000 UTC m=+219.017736898" watchObservedRunningTime="2026-01-25 00:12:59.786724972 +0000 UTC m=+219.019715412" Jan 25 00:13:00 crc kubenswrapper[4947]: I0125 00:13:00.360840 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:13:00 crc kubenswrapper[4947]: I0125 00:13:00.360904 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:13:00 crc kubenswrapper[4947]: I0125 00:13:00.427178 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:13:00 crc kubenswrapper[4947]: I0125 00:13:00.678519 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:13:00 crc kubenswrapper[4947]: I0125 00:13:00.782817 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:13:01 crc kubenswrapper[4947]: I0125 00:13:01.145677 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:13:01 crc kubenswrapper[4947]: I0125 
00:13:01.145779 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:13:01 crc kubenswrapper[4947]: I0125 00:13:01.200743 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:13:01 crc kubenswrapper[4947]: I0125 00:13:01.391113 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:13:01 crc kubenswrapper[4947]: I0125 00:13:01.391165 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:13:01 crc kubenswrapper[4947]: I0125 00:13:01.440925 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:13:01 crc kubenswrapper[4947]: I0125 00:13:01.789463 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:13:01 crc kubenswrapper[4947]: I0125 00:13:01.811926 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:13:03 crc kubenswrapper[4947]: I0125 00:13:03.822043 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckt7"] Jan 25 00:13:03 crc kubenswrapper[4947]: I0125 00:13:03.822803 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2ckt7" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerName="registry-server" containerID="cri-o://317126b2d6e4f76d49bc8f58d4a6611c35dbaae31512dbd0e9d9b571103fb945" gracePeriod=2 Jan 25 00:13:04 crc kubenswrapper[4947]: I0125 00:13:04.021907 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4w46p"] Jan 25 00:13:04 crc 
kubenswrapper[4947]: I0125 00:13:04.022324 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4w46p" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerName="registry-server" containerID="cri-o://cda1828998251daaa0955fcd8ff8f0139c55af4883ef564ee55bdf29defa9273" gracePeriod=2 Jan 25 00:13:05 crc kubenswrapper[4947]: I0125 00:13:05.617705 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5687556c4c-8vd78"] Jan 25 00:13:05 crc kubenswrapper[4947]: I0125 00:13:05.617999 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" podUID="494402be-6a25-4b8d-a515-de9eba8f1d31" containerName="controller-manager" containerID="cri-o://1c028832216129582df30e9ae1e01392e885f0edd17b2831b6b48bd47fc1c5c0" gracePeriod=30 Jan 25 00:13:05 crc kubenswrapper[4947]: I0125 00:13:05.700490 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t"] Jan 25 00:13:05 crc kubenswrapper[4947]: I0125 00:13:05.700855 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" podUID="d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" containerName="route-controller-manager" containerID="cri-o://6c1d7cc945b820b218fa75e06214182e9d3f500a0874845bdee9db322e9cef77" gracePeriod=30 Jan 25 00:13:05 crc kubenswrapper[4947]: I0125 00:13:05.764713 4947 generic.go:334] "Generic (PLEG): container finished" podID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerID="cda1828998251daaa0955fcd8ff8f0139c55af4883ef564ee55bdf29defa9273" exitCode=0 Jan 25 00:13:05 crc kubenswrapper[4947]: I0125 00:13:05.764793 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4w46p" 
event={"ID":"57fceeaa-414d-4570-98fb-2b8a06a7d3bb","Type":"ContainerDied","Data":"cda1828998251daaa0955fcd8ff8f0139c55af4883ef564ee55bdf29defa9273"} Jan 25 00:13:05 crc kubenswrapper[4947]: I0125 00:13:05.767243 4947 generic.go:334] "Generic (PLEG): container finished" podID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerID="317126b2d6e4f76d49bc8f58d4a6611c35dbaae31512dbd0e9d9b571103fb945" exitCode=0 Jan 25 00:13:05 crc kubenswrapper[4947]: I0125 00:13:05.767289 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckt7" event={"ID":"47cb5005-6286-4d5c-b654-65009ac6d3d9","Type":"ContainerDied","Data":"317126b2d6e4f76d49bc8f58d4a6611c35dbaae31512dbd0e9d9b571103fb945"} Jan 25 00:13:07 crc kubenswrapper[4947]: I0125 00:13:07.783000 4947 generic.go:334] "Generic (PLEG): container finished" podID="494402be-6a25-4b8d-a515-de9eba8f1d31" containerID="1c028832216129582df30e9ae1e01392e885f0edd17b2831b6b48bd47fc1c5c0" exitCode=0 Jan 25 00:13:07 crc kubenswrapper[4947]: I0125 00:13:07.783101 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" event={"ID":"494402be-6a25-4b8d-a515-de9eba8f1d31","Type":"ContainerDied","Data":"1c028832216129582df30e9ae1e01392e885f0edd17b2831b6b48bd47fc1c5c0"} Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.076731 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.206534 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-utilities\") pod \"47cb5005-6286-4d5c-b654-65009ac6d3d9\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.206668 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj6xh\" (UniqueName: \"kubernetes.io/projected/47cb5005-6286-4d5c-b654-65009ac6d3d9-kube-api-access-xj6xh\") pod \"47cb5005-6286-4d5c-b654-65009ac6d3d9\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.206737 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-catalog-content\") pod \"47cb5005-6286-4d5c-b654-65009ac6d3d9\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.208554 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-utilities" (OuterVolumeSpecName: "utilities") pod "47cb5005-6286-4d5c-b654-65009ac6d3d9" (UID: "47cb5005-6286-4d5c-b654-65009ac6d3d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.215468 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47cb5005-6286-4d5c-b654-65009ac6d3d9-kube-api-access-xj6xh" (OuterVolumeSpecName: "kube-api-access-xj6xh") pod "47cb5005-6286-4d5c-b654-65009ac6d3d9" (UID: "47cb5005-6286-4d5c-b654-65009ac6d3d9"). InnerVolumeSpecName "kube-api-access-xj6xh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.230065 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47cb5005-6286-4d5c-b654-65009ac6d3d9" (UID: "47cb5005-6286-4d5c-b654-65009ac6d3d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.308104 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.308152 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj6xh\" (UniqueName: \"kubernetes.io/projected/47cb5005-6286-4d5c-b654-65009ac6d3d9-kube-api-access-xj6xh\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.308164 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.410083 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.410159 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.413252 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.415411 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.468986 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.469892 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.592299 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.593224 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.611699 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.653622 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.719898 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-catalog-content\") pod \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.720614 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-utilities\") pod \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.720827 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fvpv\" (UniqueName: \"kubernetes.io/projected/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-kube-api-access-9fvpv\") pod \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.721641 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-utilities" (OuterVolumeSpecName: "utilities") pod "57fceeaa-414d-4570-98fb-2b8a06a7d3bb" (UID: "57fceeaa-414d-4570-98fb-2b8a06a7d3bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.728963 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-kube-api-access-9fvpv" (OuterVolumeSpecName: "kube-api-access-9fvpv") pod "57fceeaa-414d-4570-98fb-2b8a06a7d3bb" (UID: "57fceeaa-414d-4570-98fb-2b8a06a7d3bb"). InnerVolumeSpecName "kube-api-access-9fvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.782262 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmsjj"] Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.810760 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4w46p"
Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.810902 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4w46p" event={"ID":"57fceeaa-414d-4570-98fb-2b8a06a7d3bb","Type":"ContainerDied","Data":"6ff6fe5b753697e94b7c8e5ddcb3516739a88d3938d849311b89961974eb03c2"}
Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.810974 4947 scope.go:117] "RemoveContainer" containerID="cda1828998251daaa0955fcd8ff8f0139c55af4883ef564ee55bdf29defa9273"
Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.822702 4947 generic.go:334] "Generic (PLEG): container finished" podID="d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" containerID="6c1d7cc945b820b218fa75e06214182e9d3f500a0874845bdee9db322e9cef77" exitCode=0
Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.822977 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" event={"ID":"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47","Type":"ContainerDied","Data":"6c1d7cc945b820b218fa75e06214182e9d3f500a0874845bdee9db322e9cef77"}
Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.823667 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-utilities\") on node \"crc\" DevicePath \"\""
Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.823695 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fvpv\" (UniqueName: \"kubernetes.io/projected/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-kube-api-access-9fvpv\") on node \"crc\" DevicePath \"\""
Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.834947 4947 scope.go:117] "RemoveContainer" containerID="7d1970bbd2b42fe7877661c52dbc7fbf441aa3e65a5ef8dfe46f639caa2e9c08"
Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.854469 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ckt7"
Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.856223 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckt7" event={"ID":"47cb5005-6286-4d5c-b654-65009ac6d3d9","Type":"ContainerDied","Data":"401542bfadee8c47bb521dc0eb21357c2ec3d46cc246c1c4b0d9b2d89d6fbbe2"}
Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.881339 4947 scope.go:117] "RemoveContainer" containerID="5b659f6a3287d4d66a77d57b5ec03b9728a52bc6ef42a979eeaaf06156f4c4a0"
Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.910693 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckt7"]
Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.915524 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4vzx6"
Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.920542 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckt7"]
Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.929578 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nmrkd"
Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.940792 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-47m2l"
Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.941169 4947 scope.go:117] "RemoveContainer" containerID="317126b2d6e4f76d49bc8f58d4a6611c35dbaae31512dbd0e9d9b571103fb945"
Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.957757 4947 scope.go:117] "RemoveContainer" containerID="f0e8d23d69c130c02800100eadb68b2223ac406b1ec604b6d679944a469596fc"
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.015733 4947 scope.go:117] "RemoveContainer" containerID="2f94cab6a2a710126f5b6870bb3f746028f1b033d9975bad0560d251170fc46f"
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.101076 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" path="/var/lib/kubelet/pods/47cb5005-6286-4d5c-b654-65009ac6d3d9/volumes"
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.546726 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t"
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.652671 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-client-ca\") pod \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") "
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.652750 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-serving-cert\") pod \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") "
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.653226 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-client-ca" (OuterVolumeSpecName: "client-ca") pod "d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" (UID: "d1842ab3-9eb3-4aa3-b77f-ee74e120fe47"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.653772 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-config\") pod \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") "
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.653846 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nrzk\" (UniqueName: \"kubernetes.io/projected/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-kube-api-access-8nrzk\") pod \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") "
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.654066 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-client-ca\") on node \"crc\" DevicePath \"\""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.654403 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-config" (OuterVolumeSpecName: "config") pod "d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" (UID: "d1842ab3-9eb3-4aa3-b77f-ee74e120fe47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.657654 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" (UID: "d1842ab3-9eb3-4aa3-b77f-ee74e120fe47"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.658068 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-kube-api-access-8nrzk" (OuterVolumeSpecName: "kube-api-access-8nrzk") pod "d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" (UID: "d1842ab3-9eb3-4aa3-b77f-ee74e120fe47"). InnerVolumeSpecName "kube-api-access-8nrzk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.687205 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78"
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.755454 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6fvp\" (UniqueName: \"kubernetes.io/projected/494402be-6a25-4b8d-a515-de9eba8f1d31-kube-api-access-w6fvp\") pod \"494402be-6a25-4b8d-a515-de9eba8f1d31\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") "
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.755541 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/494402be-6a25-4b8d-a515-de9eba8f1d31-serving-cert\") pod \"494402be-6a25-4b8d-a515-de9eba8f1d31\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") "
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.755575 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-proxy-ca-bundles\") pod \"494402be-6a25-4b8d-a515-de9eba8f1d31\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") "
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.757334 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-config\") pod \"494402be-6a25-4b8d-a515-de9eba8f1d31\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") "
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.757369 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-client-ca\") pod \"494402be-6a25-4b8d-a515-de9eba8f1d31\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") "
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.757778 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nrzk\" (UniqueName: \"kubernetes.io/projected/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-kube-api-access-8nrzk\") on node \"crc\" DevicePath \"\""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.757800 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.757811 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-config\") on node \"crc\" DevicePath \"\""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.756648 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "494402be-6a25-4b8d-a515-de9eba8f1d31" (UID: "494402be-6a25-4b8d-a515-de9eba8f1d31"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.758281 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-client-ca" (OuterVolumeSpecName: "client-ca") pod "494402be-6a25-4b8d-a515-de9eba8f1d31" (UID: "494402be-6a25-4b8d-a515-de9eba8f1d31"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.758668 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-config" (OuterVolumeSpecName: "config") pod "494402be-6a25-4b8d-a515-de9eba8f1d31" (UID: "494402be-6a25-4b8d-a515-de9eba8f1d31"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.760767 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/494402be-6a25-4b8d-a515-de9eba8f1d31-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "494402be-6a25-4b8d-a515-de9eba8f1d31" (UID: "494402be-6a25-4b8d-a515-de9eba8f1d31"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.760889 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/494402be-6a25-4b8d-a515-de9eba8f1d31-kube-api-access-w6fvp" (OuterVolumeSpecName: "kube-api-access-w6fvp") pod "494402be-6a25-4b8d-a515-de9eba8f1d31" (UID: "494402be-6a25-4b8d-a515-de9eba8f1d31"). InnerVolumeSpecName "kube-api-access-w6fvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.859844 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-config\") on node \"crc\" DevicePath \"\""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.859895 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-client-ca\") on node \"crc\" DevicePath \"\""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.859912 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6fvp\" (UniqueName: \"kubernetes.io/projected/494402be-6a25-4b8d-a515-de9eba8f1d31-kube-api-access-w6fvp\") on node \"crc\" DevicePath \"\""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.859939 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/494402be-6a25-4b8d-a515-de9eba8f1d31-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.859950 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.862427 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t"
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.862582 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" event={"ID":"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47","Type":"ContainerDied","Data":"7fa469b1f93c6d5d71121e93f8cc7b379adfec6c3fbf576d48d60d1ba1c8315e"}
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.862675 4947 scope.go:117] "RemoveContainer" containerID="6c1d7cc945b820b218fa75e06214182e9d3f500a0874845bdee9db322e9cef77"
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.869736 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78"
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.869714 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" event={"ID":"494402be-6a25-4b8d-a515-de9eba8f1d31","Type":"ContainerDied","Data":"4c061a35ba34d1fb75aea276081e1c5742b11fc541423a49284514c05ed48d3b"}
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.899604 4947 scope.go:117] "RemoveContainer" containerID="1c028832216129582df30e9ae1e01392e885f0edd17b2831b6b48bd47fc1c5c0"
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.918166 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5687556c4c-8vd78"]
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.922096 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5687556c4c-8vd78"]
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.937821 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t"]
Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.940433 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t"]
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.213949 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"]
Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224370 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerName="registry-server"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224416 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerName="registry-server"
Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224441 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerName="extract-content"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224451 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerName="extract-content"
Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224465 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerName="extract-utilities"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224475 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerName="extract-utilities"
Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224491 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4f53a6-fcc3-4310-965d-9a5dda91080b" containerName="image-pruner"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224500 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4f53a6-fcc3-4310-965d-9a5dda91080b" containerName="image-pruner"
Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224511 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerName="extract-utilities"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224520 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerName="extract-utilities"
Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224537 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerName="extract-content"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224546 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerName="extract-content"
Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224561 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d05abe-f768-43d7-abf4-0a7a4e36c37e" containerName="pruner"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224569 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d05abe-f768-43d7-abf4-0a7a4e36c37e" containerName="pruner"
Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224586 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="494402be-6a25-4b8d-a515-de9eba8f1d31" containerName="controller-manager"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224595 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="494402be-6a25-4b8d-a515-de9eba8f1d31" containerName="controller-manager"
Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224607 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" containerName="route-controller-manager"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224616 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" containerName="route-controller-manager"
Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224625 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerName="registry-server"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224633 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerName="registry-server"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.225010 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerName="registry-server"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.225026 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d05abe-f768-43d7-abf4-0a7a4e36c37e" containerName="pruner"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.225042 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="494402be-6a25-4b8d-a515-de9eba8f1d31" containerName="controller-manager"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.225051 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" containerName="route-controller-manager"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.225063 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4f53a6-fcc3-4310-965d-9a5dda91080b" containerName="image-pruner"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.225078 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerName="registry-server"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.225573 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"]
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.226118 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"]
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.226236 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.226748 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.231737 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.232221 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.232566 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.232728 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.237926 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.238259 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.238409 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.238559 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.238849 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.239016 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.239161 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.239387 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.249008 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.251586 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"]
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.267400 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-client-ca\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.267451 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/507c2587-a4ff-48cd-8740-800f9e614c65-serving-cert\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.267487 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-config\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.267811 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312d7f0a-f809-4e61-9ddd-46a5328b297c-serving-cert\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.267918 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-config\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.268020 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w7j5\" (UniqueName: \"kubernetes.io/projected/312d7f0a-f809-4e61-9ddd-46a5328b297c-kube-api-access-9w7j5\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.268190 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-client-ca\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.268317 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-proxy-ca-bundles\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.268410 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88fpz\" (UniqueName: \"kubernetes.io/projected/507c2587-a4ff-48cd-8740-800f9e614c65-kube-api-access-88fpz\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370350 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312d7f0a-f809-4e61-9ddd-46a5328b297c-serving-cert\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370425 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-config\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370458 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w7j5\" (UniqueName: \"kubernetes.io/projected/312d7f0a-f809-4e61-9ddd-46a5328b297c-kube-api-access-9w7j5\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370504 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-client-ca\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370528 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-proxy-ca-bundles\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370555 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88fpz\" (UniqueName: \"kubernetes.io/projected/507c2587-a4ff-48cd-8740-800f9e614c65-kube-api-access-88fpz\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370582 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-client-ca\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370608 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/507c2587-a4ff-48cd-8740-800f9e614c65-serving-cert\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370634 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-config\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.372281 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-client-ca\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.372400 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-proxy-ca-bundles\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.372950 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-client-ca\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.374253 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-config\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.378333 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312d7f0a-f809-4e61-9ddd-46a5328b297c-serving-cert\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.378609 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/507c2587-a4ff-48cd-8740-800f9e614c65-serving-cert\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.393753 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w7j5\" (UniqueName: \"kubernetes.io/projected/312d7f0a-f809-4e61-9ddd-46a5328b297c-kube-api-access-9w7j5\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.396860 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88fpz\" (UniqueName: \"kubernetes.io/projected/507c2587-a4ff-48cd-8740-800f9e614c65-kube-api-access-88fpz\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.587554 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.824395 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-config\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"
Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.876274 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"
Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.012846 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"]
Jan 25 00:13:11 crc kubenswrapper[4947]: W0125 00:13:11.029963 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod312d7f0a_f809_4e61_9ddd_46a5328b297c.slice/crio-d19e2f3886646df15934b672a149e21934ab3e0f233ab59f1b6c04edeba54de9 WatchSource:0}: Error finding container d19e2f3886646df15934b672a149e21934ab3e0f233ab59f1b6c04edeba54de9: Status 404 returned error can't find the container with id d19e2f3886646df15934b672a149e21934ab3e0f233ab59f1b6c04edeba54de9
Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.097725 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="494402be-6a25-4b8d-a515-de9eba8f1d31" path="/var/lib/kubelet/pods/494402be-6a25-4b8d-a515-de9eba8f1d31/volumes"
Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.098801 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" path="/var/lib/kubelet/pods/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47/volumes"
Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.337648 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"]
Jan 25 00:13:11 crc kubenswrapper[4947]: W0125 00:13:11.355816 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod507c2587_a4ff_48cd_8740_800f9e614c65.slice/crio-e3712692a024a59d8c1f6eeedf100b94d93a2bc93180129c8667faed18062f64 WatchSource:0}: Error finding container e3712692a024a59d8c1f6eeedf100b94d93a2bc93180129c8667faed18062f64: Status 404 returned error can't find the
container with id e3712692a024a59d8c1f6eeedf100b94d93a2bc93180129c8667faed18062f64 Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.450387 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57fceeaa-414d-4570-98fb-2b8a06a7d3bb" (UID: "57fceeaa-414d-4570-98fb-2b8a06a7d3bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.489398 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.563433 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4w46p"] Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.573584 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4w46p"] Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.888627 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" event={"ID":"507c2587-a4ff-48cd-8740-800f9e614c65","Type":"ContainerStarted","Data":"767798f91b016414bc55cbaa8ecb99f3eaea02413cb909d3fa8466c832339705"} Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.888686 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.888700 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" 
event={"ID":"507c2587-a4ff-48cd-8740-800f9e614c65","Type":"ContainerStarted","Data":"e3712692a024a59d8c1f6eeedf100b94d93a2bc93180129c8667faed18062f64"} Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.893477 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" event={"ID":"312d7f0a-f809-4e61-9ddd-46a5328b297c","Type":"ContainerStarted","Data":"16bf48c77027103a40b98bc529ff47a76d184e9a4ff844a9962a12f32fb22999"} Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.893546 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" event={"ID":"312d7f0a-f809-4e61-9ddd-46a5328b297c","Type":"ContainerStarted","Data":"d19e2f3886646df15934b672a149e21934ab3e0f233ab59f1b6c04edeba54de9"} Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.893729 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.899981 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.918064 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" podStartSLOduration=6.91803713 podStartE2EDuration="6.91803713s" podCreationTimestamp="2026-01-25 00:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:13:11.914270636 +0000 UTC m=+231.147261076" watchObservedRunningTime="2026-01-25 00:13:11.91803713 +0000 UTC m=+231.151027570" Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.924367 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.936208 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" podStartSLOduration=6.936179481 podStartE2EDuration="6.936179481s" podCreationTimestamp="2026-01-25 00:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:13:11.932400327 +0000 UTC m=+231.165390757" watchObservedRunningTime="2026-01-25 00:13:11.936179481 +0000 UTC m=+231.169169921" Jan 25 00:13:12 crc kubenswrapper[4947]: I0125 00:13:12.616334 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmrkd"] Jan 25 00:13:12 crc kubenswrapper[4947]: I0125 00:13:12.616698 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nmrkd" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerName="registry-server" containerID="cri-o://3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686" gracePeriod=2 Jan 25 00:13:12 crc kubenswrapper[4947]: I0125 00:13:12.818633 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4vzx6"] Jan 25 00:13:12 crc kubenswrapper[4947]: I0125 00:13:12.819072 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4vzx6" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerName="registry-server" containerID="cri-o://22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14" gracePeriod=2 Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.099195 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" 
path="/var/lib/kubelet/pods/57fceeaa-414d-4570-98fb-2b8a06a7d3bb/volumes" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.821327 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.827885 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.921280 4947 generic.go:334] "Generic (PLEG): container finished" podID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerID="3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686" exitCode=0 Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.921340 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmrkd" event={"ID":"8631ec11-9ab2-4799-b57c-0a346ec69767","Type":"ContainerDied","Data":"3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686"} Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.921378 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.921408 4947 scope.go:117] "RemoveContainer" containerID="3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.921394 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmrkd" event={"ID":"8631ec11-9ab2-4799-b57c-0a346ec69767","Type":"ContainerDied","Data":"7d6ebf3601605e6c873a327cc838407e459ee58147699177b0740d46b1d7aedf"} Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.924382 4947 generic.go:334] "Generic (PLEG): container finished" podID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerID="22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14" exitCode=0 Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.924468 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.924501 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vzx6" event={"ID":"4fbe2fc7-f0a5-439c-988c-d034d3da6add","Type":"ContainerDied","Data":"22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14"} Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.924537 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vzx6" event={"ID":"4fbe2fc7-f0a5-439c-988c-d034d3da6add","Type":"ContainerDied","Data":"e4fc08944b569f65f472ef5d6a0000744c15a40d1962fcdb333c93ea9560dbba"} Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.940227 4947 scope.go:117] "RemoveContainer" containerID="5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.957940 4947 scope.go:117] "RemoveContainer" 
containerID="ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.979065 4947 scope.go:117] "RemoveContainer" containerID="3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686" Jan 25 00:13:13 crc kubenswrapper[4947]: E0125 00:13:13.979608 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686\": container with ID starting with 3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686 not found: ID does not exist" containerID="3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.979641 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686"} err="failed to get container status \"3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686\": rpc error: code = NotFound desc = could not find container \"3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686\": container with ID starting with 3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686 not found: ID does not exist" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.979670 4947 scope.go:117] "RemoveContainer" containerID="5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147" Jan 25 00:13:13 crc kubenswrapper[4947]: E0125 00:13:13.979981 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147\": container with ID starting with 5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147 not found: ID does not exist" containerID="5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147" Jan 25 00:13:13 crc 
kubenswrapper[4947]: I0125 00:13:13.980036 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147"} err="failed to get container status \"5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147\": rpc error: code = NotFound desc = could not find container \"5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147\": container with ID starting with 5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147 not found: ID does not exist" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.980073 4947 scope.go:117] "RemoveContainer" containerID="ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0" Jan 25 00:13:13 crc kubenswrapper[4947]: E0125 00:13:13.980365 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0\": container with ID starting with ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0 not found: ID does not exist" containerID="ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.980388 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0"} err="failed to get container status \"ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0\": rpc error: code = NotFound desc = could not find container \"ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0\": container with ID starting with ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0 not found: ID does not exist" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.980400 4947 scope.go:117] "RemoveContainer" containerID="22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14" Jan 25 
00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.997873 4947 scope.go:117] "RemoveContainer" containerID="df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.014065 4947 scope.go:117] "RemoveContainer" containerID="35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.021266 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-utilities\") pod \"8631ec11-9ab2-4799-b57c-0a346ec69767\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.021332 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-catalog-content\") pod \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.021377 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lszc2\" (UniqueName: \"kubernetes.io/projected/4fbe2fc7-f0a5-439c-988c-d034d3da6add-kube-api-access-lszc2\") pod \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.021930 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-utilities\") pod \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.021984 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfhzb\" (UniqueName: 
\"kubernetes.io/projected/8631ec11-9ab2-4799-b57c-0a346ec69767-kube-api-access-tfhzb\") pod \"8631ec11-9ab2-4799-b57c-0a346ec69767\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.022013 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-catalog-content\") pod \"8631ec11-9ab2-4799-b57c-0a346ec69767\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.027438 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-utilities" (OuterVolumeSpecName: "utilities") pod "8631ec11-9ab2-4799-b57c-0a346ec69767" (UID: "8631ec11-9ab2-4799-b57c-0a346ec69767"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.028700 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8631ec11-9ab2-4799-b57c-0a346ec69767-kube-api-access-tfhzb" (OuterVolumeSpecName: "kube-api-access-tfhzb") pod "8631ec11-9ab2-4799-b57c-0a346ec69767" (UID: "8631ec11-9ab2-4799-b57c-0a346ec69767"). InnerVolumeSpecName "kube-api-access-tfhzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.028853 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fbe2fc7-f0a5-439c-988c-d034d3da6add-kube-api-access-lszc2" (OuterVolumeSpecName: "kube-api-access-lszc2") pod "4fbe2fc7-f0a5-439c-988c-d034d3da6add" (UID: "4fbe2fc7-f0a5-439c-988c-d034d3da6add"). InnerVolumeSpecName "kube-api-access-lszc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.029303 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-utilities" (OuterVolumeSpecName: "utilities") pod "4fbe2fc7-f0a5-439c-988c-d034d3da6add" (UID: "4fbe2fc7-f0a5-439c-988c-d034d3da6add"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.057860 4947 scope.go:117] "RemoveContainer" containerID="22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14" Jan 25 00:13:14 crc kubenswrapper[4947]: E0125 00:13:14.058519 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14\": container with ID starting with 22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14 not found: ID does not exist" containerID="22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.058573 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14"} err="failed to get container status \"22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14\": rpc error: code = NotFound desc = could not find container \"22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14\": container with ID starting with 22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14 not found: ID does not exist" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.058607 4947 scope.go:117] "RemoveContainer" containerID="df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985" Jan 25 00:13:14 crc kubenswrapper[4947]: E0125 00:13:14.059720 4947 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985\": container with ID starting with df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985 not found: ID does not exist" containerID="df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.059764 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985"} err="failed to get container status \"df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985\": rpc error: code = NotFound desc = could not find container \"df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985\": container with ID starting with df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985 not found: ID does not exist" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.059796 4947 scope.go:117] "RemoveContainer" containerID="35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25" Jan 25 00:13:14 crc kubenswrapper[4947]: E0125 00:13:14.061485 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25\": container with ID starting with 35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25 not found: ID does not exist" containerID="35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.061512 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25"} err="failed to get container status \"35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25\": rpc error: code = NotFound desc = could not find container 
\"35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25\": container with ID starting with 35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25 not found: ID does not exist" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.080496 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8631ec11-9ab2-4799-b57c-0a346ec69767" (UID: "8631ec11-9ab2-4799-b57c-0a346ec69767"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.085109 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fbe2fc7-f0a5-439c-988c-d034d3da6add" (UID: "4fbe2fc7-f0a5-439c-988c-d034d3da6add"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.123803 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.123839 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfhzb\" (UniqueName: \"kubernetes.io/projected/8631ec11-9ab2-4799-b57c-0a346ec69767-kube-api-access-tfhzb\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.123852 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.123862 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.123870 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.123879 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lszc2\" (UniqueName: \"kubernetes.io/projected/4fbe2fc7-f0a5-439c-988c-d034d3da6add-kube-api-access-lszc2\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.259979 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmrkd"] Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.263320 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-nmrkd"] Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.270632 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4vzx6"] Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.273869 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4vzx6"] Jan 25 00:13:15 crc kubenswrapper[4947]: I0125 00:13:15.100510 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" path="/var/lib/kubelet/pods/4fbe2fc7-f0a5-439c-988c-d034d3da6add/volumes" Jan 25 00:13:15 crc kubenswrapper[4947]: I0125 00:13:15.103449 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" path="/var/lib/kubelet/pods/8631ec11-9ab2-4799-b57c-0a346ec69767/volumes" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.015057 4947 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.015695 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerName="extract-content" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.015848 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerName="extract-content" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.015887 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerName="extract-content" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.015901 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerName="extract-content" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.015932 4947 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerName="registry-server" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.015946 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerName="registry-server" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.015963 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerName="extract-utilities" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.015995 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerName="extract-utilities" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.016030 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerName="registry-server" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.016048 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerName="registry-server" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.016081 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerName="extract-utilities" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.016094 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerName="extract-utilities" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.016565 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerName="registry-server" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.016610 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerName="registry-server" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.017430 4947 kubelet.go:2431] "SyncLoop REMOVE" 
source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.017690 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.017998 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42" gracePeriod=15 Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.018078 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59" gracePeriod=15 Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.018210 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c" gracePeriod=15 Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.018109 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278" gracePeriod=15 Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.018171 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2" gracePeriod=15 Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.018909 4947 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.019303 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019328 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.019349 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019361 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.019377 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019391 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.019405 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019417 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.019440 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019453 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.019469 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019481 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.019496 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019508 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019736 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019765 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019788 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019805 4947 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019821 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019838 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.068355 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.149487 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.149573 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.150079 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 
00:13:16.150118 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.150149 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.150167 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.150227 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.150253 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 
crc kubenswrapper[4947]: I0125 00:13:16.251297 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251347 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251373 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251416 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251450 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251470 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251473 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251514 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251482 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251493 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251575 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251612 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251654 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251708 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251738 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251768 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.365324 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: W0125 00:13:16.391381 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-d7349a1a4be8bfef6353d3f7647578bb21403b6708364430653b30bd7c8f97d9 WatchSource:0}: Error finding container d7349a1a4be8bfef6353d3f7647578bb21403b6708364430653b30bd7c8f97d9: Status 404 returned error can't find the container with id d7349a1a4be8bfef6353d3f7647578bb21403b6708364430653b30bd7c8f97d9 Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.402080 4947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188dd0f8fa88b90c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-25 00:13:16.400204044 +0000 UTC m=+235.633194494,LastTimestamp:2026-01-25 00:13:16.400204044 +0000 UTC m=+235.633194494,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.947758 4947 generic.go:334] "Generic (PLEG): container finished" podID="3305e0ba-7064-415c-bbaa-bdc630d95e40" containerID="34c655a70626cb0470c8341f4426a959f7be73c9bd302d5c7f42a0999b60f186" exitCode=0 Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.947859 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3305e0ba-7064-415c-bbaa-bdc630d95e40","Type":"ContainerDied","Data":"34c655a70626cb0470c8341f4426a959f7be73c9bd302d5c7f42a0999b60f186"} Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.949843 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.950529 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"87af76d9cedf9765995fdba192251417d6cb96cc8dbaac5f8d89ebd77523cb24"} Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.950587 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d7349a1a4be8bfef6353d3f7647578bb21403b6708364430653b30bd7c8f97d9"} Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.950630 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.951427 4947 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.952056 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.952514 4947 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.953307 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.953574 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 25 00:13:16 crc 
kubenswrapper[4947]: I0125 00:13:16.955698 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.956638 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59" exitCode=0 Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.956670 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c" exitCode=0 Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.956680 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278" exitCode=0 Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.956688 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2" exitCode=2 Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.956741 4947 scope.go:117] "RemoveContainer" containerID="86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587" Jan 25 00:13:17 crc kubenswrapper[4947]: I0125 00:13:17.968872 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.355841 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.356776 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.357383 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.485787 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3305e0ba-7064-415c-bbaa-bdc630d95e40-kube-api-access\") pod \"3305e0ba-7064-415c-bbaa-bdc630d95e40\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.485871 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-var-lock\") pod \"3305e0ba-7064-415c-bbaa-bdc630d95e40\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.485952 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-kubelet-dir\") pod \"3305e0ba-7064-415c-bbaa-bdc630d95e40\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.486464 4947 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-var-lock" (OuterVolumeSpecName: "var-lock") pod "3305e0ba-7064-415c-bbaa-bdc630d95e40" (UID: "3305e0ba-7064-415c-bbaa-bdc630d95e40"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.486624 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3305e0ba-7064-415c-bbaa-bdc630d95e40" (UID: "3305e0ba-7064-415c-bbaa-bdc630d95e40"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.488018 4947 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.488045 4947 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-var-lock\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.493754 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3305e0ba-7064-415c-bbaa-bdc630d95e40-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3305e0ba-7064-415c-bbaa-bdc630d95e40" (UID: "3305e0ba-7064-415c-bbaa-bdc630d95e40"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.497554 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.499530 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.500353 4947 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.501062 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.501581 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.589579 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3305e0ba-7064-415c-bbaa-bdc630d95e40-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:18 crc kubenswrapper[4947]: E0125 
00:13:18.645501 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:13:18Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:13:18Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:13:18Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:13:18Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:18 crc kubenswrapper[4947]: E0125 00:13:18.646057 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:18 crc kubenswrapper[4947]: E0125 00:13:18.646631 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:18 crc kubenswrapper[4947]: E0125 00:13:18.647061 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:18 crc kubenswrapper[4947]: E0125 00:13:18.647552 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:18 crc kubenswrapper[4947]: E0125 00:13:18.647674 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.690858 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.690992 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.690986 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.691072 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.691165 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.691193 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.691656 4947 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.691713 4947 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.691730 4947 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.981875 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3305e0ba-7064-415c-bbaa-bdc630d95e40","Type":"ContainerDied","Data":"c860d5f033f7c7c376e626eb2cc279e42901878e52850a69646c280979b6502c"} Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.982049 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c860d5f033f7c7c376e626eb2cc279e42901878e52850a69646c280979b6502c" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.982074 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.987735 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.989122 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42" exitCode=0 Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.989289 4947 scope.go:117] "RemoveContainer" containerID="7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59" Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.989401 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.006220 4947 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.006795 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.007293 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.017411 4947 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.018607 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.019098 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.022261 4947 scope.go:117] "RemoveContainer" containerID="bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.044051 4947 scope.go:117] "RemoveContainer" containerID="872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.069759 4947 scope.go:117] "RemoveContainer" containerID="d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.090092 4947 scope.go:117] "RemoveContainer" 
containerID="6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.106519 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.124364 4947 scope.go:117] "RemoveContainer" containerID="050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.171294 4947 scope.go:117] "RemoveContainer" containerID="7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59" Jan 25 00:13:19 crc kubenswrapper[4947]: E0125 00:13:19.172435 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\": container with ID starting with 7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59 not found: ID does not exist" containerID="7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.172501 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59"} err="failed to get container status \"7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\": rpc error: code = NotFound desc = could not find container \"7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\": container with ID starting with 7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59 not found: ID does not exist" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.172532 4947 scope.go:117] "RemoveContainer" containerID="bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c" Jan 25 00:13:19 crc kubenswrapper[4947]: E0125 00:13:19.173211 4947 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\": container with ID starting with bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c not found: ID does not exist" containerID="bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.173900 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c"} err="failed to get container status \"bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\": rpc error: code = NotFound desc = could not find container \"bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\": container with ID starting with bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c not found: ID does not exist" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.174073 4947 scope.go:117] "RemoveContainer" containerID="872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278" Jan 25 00:13:19 crc kubenswrapper[4947]: E0125 00:13:19.174585 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\": container with ID starting with 872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278 not found: ID does not exist" containerID="872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.174677 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278"} err="failed to get container status \"872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\": rpc error: code = NotFound desc = could 
not find container \"872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\": container with ID starting with 872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278 not found: ID does not exist" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.174729 4947 scope.go:117] "RemoveContainer" containerID="d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2" Jan 25 00:13:19 crc kubenswrapper[4947]: E0125 00:13:19.175322 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\": container with ID starting with d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2 not found: ID does not exist" containerID="d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.175385 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2"} err="failed to get container status \"d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\": rpc error: code = NotFound desc = could not find container \"d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\": container with ID starting with d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2 not found: ID does not exist" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.175426 4947 scope.go:117] "RemoveContainer" containerID="6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42" Jan 25 00:13:19 crc kubenswrapper[4947]: E0125 00:13:19.175840 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\": container with ID starting with 6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42 not found: 
ID does not exist" containerID="6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.175875 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42"} err="failed to get container status \"6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\": rpc error: code = NotFound desc = could not find container \"6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\": container with ID starting with 6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42 not found: ID does not exist" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.175892 4947 scope.go:117] "RemoveContainer" containerID="050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923" Jan 25 00:13:19 crc kubenswrapper[4947]: E0125 00:13:19.176693 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\": container with ID starting with 050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923 not found: ID does not exist" containerID="050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923" Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.176742 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923"} err="failed to get container status \"050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\": rpc error: code = NotFound desc = could not find container \"050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\": container with ID starting with 050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923 not found: ID does not exist" Jan 25 00:13:20 crc kubenswrapper[4947]: E0125 00:13:20.123729 4947 
desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" volumeName="registry-storage" Jan 25 00:13:21 crc kubenswrapper[4947]: I0125 00:13:21.096245 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:21 crc kubenswrapper[4947]: I0125 00:13:21.096813 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:24 crc kubenswrapper[4947]: E0125 00:13:24.289646 4947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188dd0f8fa88b90c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-25 00:13:16.400204044 +0000 UTC m=+235.633194494,LastTimestamp:2026-01-25 00:13:16.400204044 +0000 UTC m=+235.633194494,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 25 00:13:24 crc kubenswrapper[4947]: E0125 00:13:24.593271 4947 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:24 crc kubenswrapper[4947]: E0125 00:13:24.594356 4947 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:24 crc kubenswrapper[4947]: E0125 00:13:24.594828 4947 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:24 crc kubenswrapper[4947]: E0125 00:13:24.595230 4947 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:24 crc kubenswrapper[4947]: E0125 00:13:24.595797 4947 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:24 crc 
kubenswrapper[4947]: I0125 00:13:24.596011 4947 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 25 00:13:24 crc kubenswrapper[4947]: E0125 00:13:24.596706 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="200ms" Jan 25 00:13:24 crc kubenswrapper[4947]: E0125 00:13:24.797921 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="400ms" Jan 25 00:13:25 crc kubenswrapper[4947]: E0125 00:13:25.199595 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="800ms" Jan 25 00:13:26 crc kubenswrapper[4947]: E0125 00:13:26.000305 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="1.6s" Jan 25 00:13:27 crc kubenswrapper[4947]: E0125 00:13:27.601611 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="3.2s" Jan 25 00:13:28 crc kubenswrapper[4947]: I0125 00:13:28.089401 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:28 crc kubenswrapper[4947]: I0125 00:13:28.090670 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:28 crc kubenswrapper[4947]: I0125 00:13:28.091316 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:28 crc kubenswrapper[4947]: I0125 00:13:28.104815 4947 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:28 crc kubenswrapper[4947]: I0125 00:13:28.104846 4947 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:28 crc kubenswrapper[4947]: E0125 00:13:28.105462 4947 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:28 crc kubenswrapper[4947]: I0125 00:13:28.106339 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:28 crc kubenswrapper[4947]: W0125 00:13:28.139297 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-9ff1025610756624468ac546412d5d6e85e7f5196883f821e55f451c3d3b2c67 WatchSource:0}: Error finding container 9ff1025610756624468ac546412d5d6e85e7f5196883f821e55f451c3d3b2c67: Status 404 returned error can't find the container with id 9ff1025610756624468ac546412d5d6e85e7f5196883f821e55f451c3d3b2c67 Jan 25 00:13:29 crc kubenswrapper[4947]: I0125 00:13:29.072898 4947 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="dfb019c9fcd82e8237372312156ac571e3f5b36d6e459883bdb86a596fe52237" exitCode=0 Jan 25 00:13:29 crc kubenswrapper[4947]: I0125 00:13:29.072951 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"dfb019c9fcd82e8237372312156ac571e3f5b36d6e459883bdb86a596fe52237"} Jan 25 00:13:29 crc kubenswrapper[4947]: I0125 00:13:29.072982 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9ff1025610756624468ac546412d5d6e85e7f5196883f821e55f451c3d3b2c67"} Jan 25 00:13:29 crc kubenswrapper[4947]: I0125 00:13:29.073985 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:29 crc kubenswrapper[4947]: I0125 00:13:29.074553 4947 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:29 crc kubenswrapper[4947]: I0125 00:13:29.074591 4947 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:29 crc kubenswrapper[4947]: I0125 00:13:29.074852 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:29 crc kubenswrapper[4947]: E0125 00:13:29.075173 4947 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:30 crc kubenswrapper[4947]: I0125 00:13:30.087296 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 25 00:13:30 crc kubenswrapper[4947]: I0125 00:13:30.087380 4947 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567" exitCode=1 Jan 25 00:13:30 crc kubenswrapper[4947]: I0125 00:13:30.087465 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567"} Jan 25 00:13:30 crc kubenswrapper[4947]: I0125 00:13:30.088184 4947 scope.go:117] 
"RemoveContainer" containerID="1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567" Jan 25 00:13:30 crc kubenswrapper[4947]: I0125 00:13:30.090051 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c1ba21ed850a24e4512b9980a2c4aaefe34c0b158660d58e36f78ba3d218751d"} Jan 25 00:13:31 crc kubenswrapper[4947]: I0125 00:13:31.113918 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 25 00:13:31 crc kubenswrapper[4947]: I0125 00:13:31.114347 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e4fec833f5a38deaeead8a4f9ba077b2a44f467be5217a53cffae2b4724a66a7"} Jan 25 00:13:31 crc kubenswrapper[4947]: I0125 00:13:31.133492 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"02b87fc3b28accbe7a0acf36d867789cc0865b40bd69e86e3c2f0a70bbcffa30"} Jan 25 00:13:31 crc kubenswrapper[4947]: I0125 00:13:31.133566 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0d2a1a8fefb5290b4c28a33ea0be3555b6b93c7669c004491ea9696c7742add0"} Jan 25 00:13:32 crc kubenswrapper[4947]: I0125 00:13:32.145880 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ee5793a080897cce8f43b1b563bed74f401771063b1a7cb9528535aa36cfcce5"} Jan 25 00:13:32 crc kubenswrapper[4947]: I0125 
00:13:32.145926 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e49d4df16b5ad093711504292d719516cb304463fa12d438cfd29daa84d0fa89"} Jan 25 00:13:32 crc kubenswrapper[4947]: I0125 00:13:32.146078 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:32 crc kubenswrapper[4947]: I0125 00:13:32.146191 4947 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:32 crc kubenswrapper[4947]: I0125 00:13:32.146220 4947 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:33 crc kubenswrapper[4947]: I0125 00:13:33.106614 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:33 crc kubenswrapper[4947]: I0125 00:13:33.107061 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:33 crc kubenswrapper[4947]: I0125 00:13:33.112221 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:33 crc kubenswrapper[4947]: I0125 00:13:33.828039 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" podUID="d3a733c1-a1cf-42ef-a056-27185292354f" containerName="oauth-openshift" containerID="cri-o://e90e55f834d170a9f2751b73e12c28d7d7c3fcc4793df05b84ce36f32d19cba4" gracePeriod=15 Jan 25 00:13:34 crc kubenswrapper[4947]: I0125 00:13:34.160975 4947 generic.go:334] "Generic (PLEG): container finished" podID="d3a733c1-a1cf-42ef-a056-27185292354f" 
containerID="e90e55f834d170a9f2751b73e12c28d7d7c3fcc4793df05b84ce36f32d19cba4" exitCode=0 Jan 25 00:13:34 crc kubenswrapper[4947]: I0125 00:13:34.161038 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" event={"ID":"d3a733c1-a1cf-42ef-a056-27185292354f","Type":"ContainerDied","Data":"e90e55f834d170a9f2751b73e12c28d7d7c3fcc4793df05b84ce36f32d19cba4"} Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.103523 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124356 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-session\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124411 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-cliconfig\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124435 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-ocp-branding-template\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124461 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/d3a733c1-a1cf-42ef-a056-27185292354f-audit-dir\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124489 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-idp-0-file-data\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124513 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-trusted-ca-bundle\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124540 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-service-ca\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124589 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-login\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124635 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbbs4\" (UniqueName: \"kubernetes.io/projected/d3a733c1-a1cf-42ef-a056-27185292354f-kube-api-access-fbbs4\") 
pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124691 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-router-certs\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124725 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-audit-policies\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124757 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-error\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124796 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-provider-selection\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124841 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-serving-cert\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: 
\"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124882 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a733c1-a1cf-42ef-a056-27185292354f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.125086 4947 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d3a733c1-a1cf-42ef-a056-27185292354f-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.125526 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.125548 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.125574 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.126335 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.130701 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.131561 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.133870 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3a733c1-a1cf-42ef-a056-27185292354f-kube-api-access-fbbs4" (OuterVolumeSpecName: "kube-api-access-fbbs4") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "kube-api-access-fbbs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.134334 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.137844 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.138111 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.138188 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.138947 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.139153 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.172475 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" event={"ID":"d3a733c1-a1cf-42ef-a056-27185292354f","Type":"ContainerDied","Data":"9fddec1b4c50133c39595d5ae85373dbb93cca3db14bf1f44dacfede0073d88d"} Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.172514 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.172572 4947 scope.go:117] "RemoveContainer" containerID="e90e55f834d170a9f2751b73e12c28d7d7c3fcc4793df05b84ce36f32d19cba4" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.226921 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.226989 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227048 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227058 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227068 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227078 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227088 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227099 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227116 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227142 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbbs4\" (UniqueName: \"kubernetes.io/projected/d3a733c1-a1cf-42ef-a056-27185292354f-kube-api-access-fbbs4\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227154 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227163 4947 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227174 4947 reconciler_common.go:293] "Volume detached 
for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:36 crc kubenswrapper[4947]: E0125 00:13:36.293617 4947 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Jan 25 00:13:37 crc kubenswrapper[4947]: I0125 00:13:37.166086 4947 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:37 crc kubenswrapper[4947]: I0125 00:13:37.568183 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:13:37 crc kubenswrapper[4947]: I0125 00:13:37.574619 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:13:38 crc kubenswrapper[4947]: I0125 00:13:38.113918 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:38 crc kubenswrapper[4947]: I0125 00:13:38.117214 4947 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="373dcecb-6344-4f80-99be-fca1af842f3d" Jan 25 00:13:38 crc kubenswrapper[4947]: I0125 00:13:38.189907 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:13:38 crc kubenswrapper[4947]: I0125 00:13:38.190036 4947 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:38 crc kubenswrapper[4947]: I0125 00:13:38.190065 4947 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:39 crc kubenswrapper[4947]: I0125 00:13:39.196965 4947 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:39 crc kubenswrapper[4947]: I0125 00:13:39.197020 4947 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:41 crc kubenswrapper[4947]: I0125 00:13:41.109926 4947 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="373dcecb-6344-4f80-99be-fca1af842f3d" Jan 25 00:13:45 crc kubenswrapper[4947]: I0125 00:13:45.366672 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 25 00:13:46 crc kubenswrapper[4947]: I0125 00:13:46.782898 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 25 00:13:46 crc kubenswrapper[4947]: I0125 00:13:46.885373 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 25 00:13:47 crc kubenswrapper[4947]: I0125 00:13:47.569550 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 25 00:13:47 crc kubenswrapper[4947]: I0125 00:13:47.668176 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 25 00:13:47 crc kubenswrapper[4947]: I0125 00:13:47.679180 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"service-ca" Jan 25 00:13:47 crc kubenswrapper[4947]: I0125 00:13:47.716568 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 25 00:13:47 crc kubenswrapper[4947]: I0125 00:13:47.726021 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 25 00:13:48 crc kubenswrapper[4947]: I0125 00:13:48.057912 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 25 00:13:48 crc kubenswrapper[4947]: I0125 00:13:48.312441 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 25 00:13:48 crc kubenswrapper[4947]: I0125 00:13:48.364633 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 25 00:13:48 crc kubenswrapper[4947]: I0125 00:13:48.418319 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:13:48 crc kubenswrapper[4947]: I0125 00:13:48.798026 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 25 00:13:48 crc kubenswrapper[4947]: I0125 00:13:48.912759 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.056045 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.067754 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 25 00:13:49 crc 
kubenswrapper[4947]: I0125 00:13:49.117601 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.144802 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.154610 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.236088 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.535914 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.687507 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.730232 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.750193 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.767486 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.817092 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.875053 4947 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.904368 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.002981 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.024069 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.024318 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.055999 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.262517 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.314802 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.315566 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.336904 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.403718 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 25 00:13:50 crc 
kubenswrapper[4947]: I0125 00:13:50.574959 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.600876 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.645599 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.664243 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.671873 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.719310 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.722984 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.955976 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.116782 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.187711 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.222775 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.333847 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.472627 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.526744 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.591470 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.761911 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.800915 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.989315 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.221635 4947 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.288082 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.425884 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.485815 4947 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.493081 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.527957 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.591261 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.603737 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.617730 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.628327 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.636997 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.637040 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.667195 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.732984 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 25 00:13:52 crc 
kubenswrapper[4947]: I0125 00:13:52.937176 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.963617 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.984242 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.103382 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.226863 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.293739 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.370318 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.423818 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.502797 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.530870 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.636193 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" 
Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.665347 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.773835 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.825036 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.856091 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.987878 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.025821 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.082730 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.108586 4947 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.175873 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.215149 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.291057 4947 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"etcd-client" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.325254 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.394074 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.420021 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.425286 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.432993 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.439688 4947 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.443309 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=38.443282819 podStartE2EDuration="38.443282819s" podCreationTimestamp="2026-01-25 00:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:13:36.128102126 +0000 UTC m=+255.361092606" watchObservedRunningTime="2026-01-25 00:13:54.443282819 +0000 UTC m=+273.676273269" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.446468 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-dmsjj","openshift-kube-apiserver/kube-apiserver-crc"] Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.446545 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.453741 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.457029 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.461690 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.476329 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.476305814 podStartE2EDuration="17.476305814s" podCreationTimestamp="2026-01-25 00:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:13:54.470016764 +0000 UTC m=+273.703007204" watchObservedRunningTime="2026-01-25 00:13:54.476305814 +0000 UTC m=+273.709296254" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.496811 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.502397 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.591271 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"service-ca-bundle" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.624283 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.713218 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.917549 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.061234 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.101339 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3a733c1-a1cf-42ef-a056-27185292354f" path="/var/lib/kubelet/pods/d3a733c1-a1cf-42ef-a056-27185292354f/volumes" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.254110 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.262409 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.355545 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.394506 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.468688 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 
00:13:55.528309 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.565381 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.657749 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.662583 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.763829 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.813304 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.813465 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.822664 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.855601 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.917297 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.077647 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 25 
00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.099981 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.194678 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.241570 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.338857 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.385682 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.420481 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.443144 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.452015 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.592758 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.597044 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.632223 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-root-ca.crt" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.661697 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.697262 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.774891 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.845651 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.857627 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.887267 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.933723 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.960486 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.974423 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.035595 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.064834 4947 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.119719 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.151554 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.316657 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.340818 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.351818 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.386650 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.406282 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.431821 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.445164 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.596305 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 
00:13:57.675547 4947 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.721261 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.744420 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.776666 4947 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.947569 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.961636 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.962196 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.995689 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.049749 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.112009 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.144803 4947 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.199549 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.219010 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.258314 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.347011 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.363904 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.430054 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.430264 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.454428 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.529745 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.535571 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.549852 4947 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.550758 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.651199 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.675256 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.676571 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.702938 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.776504 4947 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.776730 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://87af76d9cedf9765995fdba192251417d6cb96cc8dbaac5f8d89ebd77523cb24" gracePeriod=5 Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.894890 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.908966 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.003672 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.024063 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.035635 4947 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.209099 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.239259 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.349423 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.361860 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.369190 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.492565 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.502003 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.578172 4947 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.597102 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.689113 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.696260 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.832370 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.900225 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.952412 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.960949 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.054379 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.087112 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.119602 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 
00:14:00.302237 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.323310 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.355424 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.569936 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.576109 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.592088 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.624523 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.693186 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.795841 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.804078 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.877083 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 25 
00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.944434 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.046844 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.104621 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.220032 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.259460 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.416421 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.597693 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.623075 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.859823 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.984327 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 25 00:14:02 crc kubenswrapper[4947]: I0125 00:14:02.064546 4947 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 25 00:14:02 crc kubenswrapper[4947]: I0125 00:14:02.112532 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 25 00:14:02 crc kubenswrapper[4947]: I0125 00:14:02.257514 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 25 00:14:02 crc kubenswrapper[4947]: I0125 00:14:02.263787 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 25 00:14:02 crc kubenswrapper[4947]: I0125 00:14:02.791592 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 25 00:14:02 crc kubenswrapper[4947]: I0125 00:14:02.848663 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 25 00:14:03 crc kubenswrapper[4947]: I0125 00:14:03.313237 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 25 00:14:03 crc kubenswrapper[4947]: I0125 00:14:03.554436 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 25 00:14:03 crc kubenswrapper[4947]: I0125 00:14:03.636535 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 25 00:14:03 crc kubenswrapper[4947]: I0125 00:14:03.646839 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 25 00:14:03 crc kubenswrapper[4947]: I0125 00:14:03.836368 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 25 00:14:03 crc 
kubenswrapper[4947]: I0125 00:14:03.853074 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.357336 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.357576 4947 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="87af76d9cedf9765995fdba192251417d6cb96cc8dbaac5f8d89ebd77523cb24" exitCode=137
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.357616 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7349a1a4be8bfef6353d3f7647578bb21403b6708364430653b30bd7c8f97d9"
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.371482 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.371581 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.406952 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.502739 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.502897 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.503355 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.503543 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.503695 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.503910 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.503437 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.503585 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.503960 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.504749 4947 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.504904 4947 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.505020 4947 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.505118 4947 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.514998 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.606541 4947 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.100345 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.100717 4947 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.112788 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.112842 4947 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2581c2c9-fa12-4425-9279-8b63dfe7ed94"
Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.115413 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.115463 4947 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2581c2c9-fa12-4425-9279-8b63dfe7ed94"
Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.378469 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.623934 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"]
Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.624333 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" podUID="312d7f0a-f809-4e61-9ddd-46a5328b297c" containerName="controller-manager" containerID="cri-o://16bf48c77027103a40b98bc529ff47a76d184e9a4ff844a9962a12f32fb22999" gracePeriod=30
Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.629047 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"]
Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.629306 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" podUID="507c2587-a4ff-48cd-8740-800f9e614c65" containerName="route-controller-manager" containerID="cri-o://767798f91b016414bc55cbaa8ecb99f3eaea02413cb909d3fa8466c832339705" gracePeriod=30
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.389187 4947 generic.go:334] "Generic (PLEG): container finished" podID="312d7f0a-f809-4e61-9ddd-46a5328b297c" containerID="16bf48c77027103a40b98bc529ff47a76d184e9a4ff844a9962a12f32fb22999" exitCode=0
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.389306 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" event={"ID":"312d7f0a-f809-4e61-9ddd-46a5328b297c","Type":"ContainerDied","Data":"16bf48c77027103a40b98bc529ff47a76d184e9a4ff844a9962a12f32fb22999"}
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.391311 4947 generic.go:334] "Generic (PLEG): container finished" podID="507c2587-a4ff-48cd-8740-800f9e614c65" containerID="767798f91b016414bc55cbaa8ecb99f3eaea02413cb909d3fa8466c832339705" exitCode=0
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.391425 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" event={"ID":"507c2587-a4ff-48cd-8740-800f9e614c65","Type":"ContainerDied","Data":"767798f91b016414bc55cbaa8ecb99f3eaea02413cb909d3fa8466c832339705"}
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.632993 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.711324 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.741911 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/507c2587-a4ff-48cd-8740-800f9e614c65-serving-cert\") pod \"507c2587-a4ff-48cd-8740-800f9e614c65\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") "
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.742091 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88fpz\" (UniqueName: \"kubernetes.io/projected/507c2587-a4ff-48cd-8740-800f9e614c65-kube-api-access-88fpz\") pod \"507c2587-a4ff-48cd-8740-800f9e614c65\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") "
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.742153 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-config\") pod \"507c2587-a4ff-48cd-8740-800f9e614c65\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") "
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.742236 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-client-ca\") pod \"507c2587-a4ff-48cd-8740-800f9e614c65\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") "
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.743825 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-client-ca" (OuterVolumeSpecName: "client-ca") pod "507c2587-a4ff-48cd-8740-800f9e614c65" (UID: "507c2587-a4ff-48cd-8740-800f9e614c65"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.748965 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-config" (OuterVolumeSpecName: "config") pod "507c2587-a4ff-48cd-8740-800f9e614c65" (UID: "507c2587-a4ff-48cd-8740-800f9e614c65"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.750526 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507c2587-a4ff-48cd-8740-800f9e614c65-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "507c2587-a4ff-48cd-8740-800f9e614c65" (UID: "507c2587-a4ff-48cd-8740-800f9e614c65"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.753317 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507c2587-a4ff-48cd-8740-800f9e614c65-kube-api-access-88fpz" (OuterVolumeSpecName: "kube-api-access-88fpz") pod "507c2587-a4ff-48cd-8740-800f9e614c65" (UID: "507c2587-a4ff-48cd-8740-800f9e614c65"). InnerVolumeSpecName "kube-api-access-88fpz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.843724 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-proxy-ca-bundles\") pod \"312d7f0a-f809-4e61-9ddd-46a5328b297c\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") "
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.844039 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-config\") pod \"312d7f0a-f809-4e61-9ddd-46a5328b297c\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") "
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.844276 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-client-ca\") pod \"312d7f0a-f809-4e61-9ddd-46a5328b297c\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") "
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.845176 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w7j5\" (UniqueName: \"kubernetes.io/projected/312d7f0a-f809-4e61-9ddd-46a5328b297c-kube-api-access-9w7j5\") pod \"312d7f0a-f809-4e61-9ddd-46a5328b297c\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") "
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.845079 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-client-ca" (OuterVolumeSpecName: "client-ca") pod "312d7f0a-f809-4e61-9ddd-46a5328b297c" (UID: "312d7f0a-f809-4e61-9ddd-46a5328b297c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.845161 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-config" (OuterVolumeSpecName: "config") pod "312d7f0a-f809-4e61-9ddd-46a5328b297c" (UID: "312d7f0a-f809-4e61-9ddd-46a5328b297c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.845168 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "312d7f0a-f809-4e61-9ddd-46a5328b297c" (UID: "312d7f0a-f809-4e61-9ddd-46a5328b297c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.845746 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312d7f0a-f809-4e61-9ddd-46a5328b297c-serving-cert\") pod \"312d7f0a-f809-4e61-9ddd-46a5328b297c\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") "
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.846563 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.846743 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/507c2587-a4ff-48cd-8740-800f9e614c65-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.846850 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-config\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.847041 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88fpz\" (UniqueName: \"kubernetes.io/projected/507c2587-a4ff-48cd-8740-800f9e614c65-kube-api-access-88fpz\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.847372 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-config\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.847476 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-client-ca\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.847559 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-client-ca\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.849045 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312d7f0a-f809-4e61-9ddd-46a5328b297c-kube-api-access-9w7j5" (OuterVolumeSpecName: "kube-api-access-9w7j5") pod "312d7f0a-f809-4e61-9ddd-46a5328b297c" (UID: "312d7f0a-f809-4e61-9ddd-46a5328b297c"). InnerVolumeSpecName "kube-api-access-9w7j5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.849366 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/312d7f0a-f809-4e61-9ddd-46a5328b297c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "312d7f0a-f809-4e61-9ddd-46a5328b297c" (UID: "312d7f0a-f809-4e61-9ddd-46a5328b297c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.948679 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w7j5\" (UniqueName: \"kubernetes.io/projected/312d7f0a-f809-4e61-9ddd-46a5328b297c-kube-api-access-9w7j5\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.948732 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312d7f0a-f809-4e61-9ddd-46a5328b297c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.256190 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"]
Jan 25 00:14:07 crc kubenswrapper[4947]: E0125 00:14:07.256713 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a733c1-a1cf-42ef-a056-27185292354f" containerName="oauth-openshift"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.256757 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a733c1-a1cf-42ef-a056-27185292354f" containerName="oauth-openshift"
Jan 25 00:14:07 crc kubenswrapper[4947]: E0125 00:14:07.256801 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312d7f0a-f809-4e61-9ddd-46a5328b297c" containerName="controller-manager"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.256821 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="312d7f0a-f809-4e61-9ddd-46a5328b297c" containerName="controller-manager"
Jan 25 00:14:07 crc kubenswrapper[4947]: E0125 00:14:07.256847 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507c2587-a4ff-48cd-8740-800f9e614c65" containerName="route-controller-manager"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.256867 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="507c2587-a4ff-48cd-8740-800f9e614c65" containerName="route-controller-manager"
Jan 25 00:14:07 crc kubenswrapper[4947]: E0125 00:14:07.256899 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.256917 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 25 00:14:07 crc kubenswrapper[4947]: E0125 00:14:07.256944 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" containerName="installer"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.258064 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" containerName="installer"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.258380 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="312d7f0a-f809-4e61-9ddd-46a5328b297c" containerName="controller-manager"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.258431 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.258459 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" containerName="installer"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.258492 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a733c1-a1cf-42ef-a056-27185292354f" containerName="oauth-openshift"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.258520 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="507c2587-a4ff-48cd-8740-800f9e614c65" containerName="route-controller-manager"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.259263 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.272626 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"]
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.353908 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-client-ca\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.353983 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/224644f1-a5e3-4fa5-8c1c-97030c1796c5-serving-cert\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.354030 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-config\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.354066 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbndx\" (UniqueName: \"kubernetes.io/projected/224644f1-a5e3-4fa5-8c1c-97030c1796c5-kube-api-access-vbndx\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.402429 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" event={"ID":"507c2587-a4ff-48cd-8740-800f9e614c65","Type":"ContainerDied","Data":"e3712692a024a59d8c1f6eeedf100b94d93a2bc93180129c8667faed18062f64"}
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.402448 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.402565 4947 scope.go:117] "RemoveContainer" containerID="767798f91b016414bc55cbaa8ecb99f3eaea02413cb909d3fa8466c832339705"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.405708 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" event={"ID":"312d7f0a-f809-4e61-9ddd-46a5328b297c","Type":"ContainerDied","Data":"d19e2f3886646df15934b672a149e21934ab3e0f233ab59f1b6c04edeba54de9"}
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.406471 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.429252 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"]
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.435475 4947 scope.go:117] "RemoveContainer" containerID="16bf48c77027103a40b98bc529ff47a76d184e9a4ff844a9962a12f32fb22999"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.446045 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"]
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.454189 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"]
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.455252 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-config\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.455319 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbndx\" (UniqueName: \"kubernetes.io/projected/224644f1-a5e3-4fa5-8c1c-97030c1796c5-kube-api-access-vbndx\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.455370 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-client-ca\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.455442 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/224644f1-a5e3-4fa5-8c1c-97030c1796c5-serving-cert\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.457236 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-config\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.457887 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-client-ca\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.461488 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/224644f1-a5e3-4fa5-8c1c-97030c1796c5-serving-cert\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.465780 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"]
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.484824 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbndx\" (UniqueName: \"kubernetes.io/projected/224644f1-a5e3-4fa5-8c1c-97030c1796c5-kube-api-access-vbndx\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.585312 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:08 crc kubenswrapper[4947]: I0125 00:14:08.050576 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"]
Jan 25 00:14:08 crc kubenswrapper[4947]: I0125 00:14:08.416741 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7" event={"ID":"224644f1-a5e3-4fa5-8c1c-97030c1796c5","Type":"ContainerStarted","Data":"de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272"}
Jan 25 00:14:08 crc kubenswrapper[4947]: I0125 00:14:08.416814 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7" event={"ID":"224644f1-a5e3-4fa5-8c1c-97030c1796c5","Type":"ContainerStarted","Data":"a2cf4520fc74b19c2f49a3aa7b17652852b6f0732aaacfb718e26a7117d8c7ee"}
Jan 25 00:14:08 crc kubenswrapper[4947]: I0125 00:14:08.417940 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:08 crc kubenswrapper[4947]: I0125 00:14:08.803116 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:08 crc kubenswrapper[4947]: I0125 00:14:08.839108 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7" podStartSLOduration=3.83907706 podStartE2EDuration="3.83907706s" podCreationTimestamp="2026-01-25 00:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:14:08.442673852 +0000 UTC m=+287.675664332" watchObservedRunningTime="2026-01-25 00:14:08.83907706 +0000 UTC m=+288.072067540"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.105636 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="312d7f0a-f809-4e61-9ddd-46a5328b297c" path="/var/lib/kubelet/pods/312d7f0a-f809-4e61-9ddd-46a5328b297c/volumes"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.106763 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="507c2587-a4ff-48cd-8740-800f9e614c65" path="/var/lib/kubelet/pods/507c2587-a4ff-48cd-8740-800f9e614c65/volumes"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.252090 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"]
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.253290 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.257385 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.257479 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.258375 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.259018 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.260756 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.265210 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.270410 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.273651 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"]
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.408850 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-client-ca\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.408933 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-proxy-ca-bundles\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.409117 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-config\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.409231 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b86fbe-129d-444e-b66d-d5cfcdfe502d-serving-cert\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.409297 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xksg2\" (UniqueName: \"kubernetes.io/projected/38b86fbe-129d-444e-b66d-d5cfcdfe502d-kube-api-access-xksg2\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.510424 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-client-ca\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.510481 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-proxy-ca-bundles\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.510579 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-config\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.510624 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b86fbe-129d-444e-b66d-d5cfcdfe502d-serving-cert\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.510661 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xksg2\" (UniqueName: \"kubernetes.io/projected/38b86fbe-129d-444e-b66d-d5cfcdfe502d-kube-api-access-xksg2\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.512853
4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-proxy-ca-bundles\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.514116 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-client-ca\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.515349 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-config\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.522854 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b86fbe-129d-444e-b66d-d5cfcdfe502d-serving-cert\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.539931 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xksg2\" (UniqueName: \"kubernetes.io/projected/38b86fbe-129d-444e-b66d-d5cfcdfe502d-kube-api-access-xksg2\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" Jan 25 
00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.585711 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.831876 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"] Jan 25 00:14:09 crc kubenswrapper[4947]: W0125 00:14:09.840000 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38b86fbe_129d_444e_b66d_d5cfcdfe502d.slice/crio-559e97a56a39b8472fae5df4f9cc2883d13427ca4056db050883156ced694882 WatchSource:0}: Error finding container 559e97a56a39b8472fae5df4f9cc2883d13427ca4056db050883156ced694882: Status 404 returned error can't find the container with id 559e97a56a39b8472fae5df4f9cc2883d13427ca4056db050883156ced694882 Jan 25 00:14:10 crc kubenswrapper[4947]: I0125 00:14:10.433787 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" event={"ID":"38b86fbe-129d-444e-b66d-d5cfcdfe502d","Type":"ContainerStarted","Data":"3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a"} Jan 25 00:14:10 crc kubenswrapper[4947]: I0125 00:14:10.433877 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" event={"ID":"38b86fbe-129d-444e-b66d-d5cfcdfe502d","Type":"ContainerStarted","Data":"559e97a56a39b8472fae5df4f9cc2883d13427ca4056db050883156ced694882"} Jan 25 00:14:10 crc kubenswrapper[4947]: I0125 00:14:10.434191 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" Jan 25 00:14:10 crc kubenswrapper[4947]: I0125 00:14:10.440523 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" Jan 25 00:14:10 crc kubenswrapper[4947]: I0125 00:14:10.474093 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" podStartSLOduration=5.474061097 podStartE2EDuration="5.474061097s" podCreationTimestamp="2026-01-25 00:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:14:10.454757843 +0000 UTC m=+289.687748293" watchObservedRunningTime="2026-01-25 00:14:10.474061097 +0000 UTC m=+289.707051567" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.251228 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-db689bc6b-lvd9p"] Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.252305 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.263521 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.263621 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.263871 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.265986 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.266394 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 25 00:14:11 crc kubenswrapper[4947]: 
I0125 00:14:11.266608 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.266684 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.266823 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.266856 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.267052 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.268812 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.270375 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.271400 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-db689bc6b-lvd9p"] Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.317557 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.318871 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.322557 4947 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343565 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-login\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343605 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-service-ca\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343632 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-session\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343651 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343665 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt4n5\" (UniqueName: \"kubernetes.io/projected/c5bbb081-ae69-4597-980e-2163cb2b1208-kube-api-access-vt4n5\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343685 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-audit-policies\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343712 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-error\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343736 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343752 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-router-certs\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343774 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343795 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5bbb081-ae69-4597-980e-2163cb2b1208-audit-dir\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343816 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-cliconfig\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343831 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: 
\"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343849 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-serving-cert\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445201 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-session\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445240 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt4n5\" (UniqueName: \"kubernetes.io/projected/c5bbb081-ae69-4597-980e-2163cb2b1208-kube-api-access-vt4n5\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445265 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445293 4947 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-audit-policies\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445331 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-error\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445364 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445386 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-router-certs\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445420 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: 
\"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445449 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5bbb081-ae69-4597-980e-2163cb2b1208-audit-dir\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445474 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-cliconfig\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445497 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445524 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-serving-cert\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445553 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-login\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445583 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-service-ca\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.446070 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5bbb081-ae69-4597-980e-2163cb2b1208-audit-dir\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.448601 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-audit-policies\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.448753 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-cliconfig\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc 
kubenswrapper[4947]: I0125 00:14:11.449219 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.449554 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-service-ca\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.454538 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-router-certs\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.456983 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.460215 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-session\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.460354 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.460619 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-serving-cert\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.461189 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-error\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.461902 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 
25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.462475 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-login\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.466593 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt4n5\" (UniqueName: \"kubernetes.io/projected/c5bbb081-ae69-4597-980e-2163cb2b1208-kube-api-access-vt4n5\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.623988 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.868824 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-db689bc6b-lvd9p"] Jan 25 00:14:11 crc kubenswrapper[4947]: W0125 00:14:11.876554 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5bbb081_ae69_4597_980e_2163cb2b1208.slice/crio-98969cd876455b78cfd35c9d783acac8e24c560e8feb314cb6beca37001ffe3c WatchSource:0}: Error finding container 98969cd876455b78cfd35c9d783acac8e24c560e8feb314cb6beca37001ffe3c: Status 404 returned error can't find the container with id 98969cd876455b78cfd35c9d783acac8e24c560e8feb314cb6beca37001ffe3c Jan 25 00:14:12 crc kubenswrapper[4947]: I0125 00:14:12.445996 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" 
event={"ID":"c5bbb081-ae69-4597-980e-2163cb2b1208","Type":"ContainerStarted","Data":"94f1cd3955178649dfbab7bf5027e159b4e1e3f66d3d5dce5543cbd66c0643dd"} Jan 25 00:14:12 crc kubenswrapper[4947]: I0125 00:14:12.446050 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" event={"ID":"c5bbb081-ae69-4597-980e-2163cb2b1208","Type":"ContainerStarted","Data":"98969cd876455b78cfd35c9d783acac8e24c560e8feb314cb6beca37001ffe3c"} Jan 25 00:14:12 crc kubenswrapper[4947]: I0125 00:14:12.466463 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" podStartSLOduration=64.466442463 podStartE2EDuration="1m4.466442463s" podCreationTimestamp="2026-01-25 00:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:14:12.464605947 +0000 UTC m=+291.697596387" watchObservedRunningTime="2026-01-25 00:14:12.466442463 +0000 UTC m=+291.699432903" Jan 25 00:14:13 crc kubenswrapper[4947]: I0125 00:14:13.452527 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:13 crc kubenswrapper[4947]: I0125 00:14:13.459727 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:18 crc kubenswrapper[4947]: I0125 00:14:18.485179 4947 generic.go:334] "Generic (PLEG): container finished" podID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerID="7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369" exitCode=0 Jan 25 00:14:18 crc kubenswrapper[4947]: I0125 00:14:18.485273 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" 
event={"ID":"fa35d682-53d6-4191-9cf9-f48b9f74e858","Type":"ContainerDied","Data":"7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369"} Jan 25 00:14:18 crc kubenswrapper[4947]: I0125 00:14:18.486330 4947 scope.go:117] "RemoveContainer" containerID="7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369" Jan 25 00:14:19 crc kubenswrapper[4947]: I0125 00:14:19.496743 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" event={"ID":"fa35d682-53d6-4191-9cf9-f48b9f74e858","Type":"ContainerStarted","Data":"c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e"} Jan 25 00:14:19 crc kubenswrapper[4947]: I0125 00:14:19.498040 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:14:19 crc kubenswrapper[4947]: I0125 00:14:19.500853 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:14:20 crc kubenswrapper[4947]: I0125 00:14:20.926456 4947 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 25 00:14:25 crc kubenswrapper[4947]: I0125 00:14:25.558890 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"] Jan 25 00:14:25 crc kubenswrapper[4947]: I0125 00:14:25.559414 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" podUID="38b86fbe-129d-444e-b66d-d5cfcdfe502d" containerName="controller-manager" containerID="cri-o://3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a" gracePeriod=30 Jan 25 00:14:25 crc kubenswrapper[4947]: I0125 00:14:25.575442 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"] Jan 25 00:14:25 crc kubenswrapper[4947]: I0125 00:14:25.575688 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7" podUID="224644f1-a5e3-4fa5-8c1c-97030c1796c5" containerName="route-controller-manager" containerID="cri-o://de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272" gracePeriod=30 Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.141336 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.196949 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.274493 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-client-ca\") pod \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.274620 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/224644f1-a5e3-4fa5-8c1c-97030c1796c5-serving-cert\") pod \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.274725 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-config\") pod \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " Jan 25 00:14:26 crc 
kubenswrapper[4947]: I0125 00:14:26.274937 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbndx\" (UniqueName: \"kubernetes.io/projected/224644f1-a5e3-4fa5-8c1c-97030c1796c5-kube-api-access-vbndx\") pod \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.275325 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-client-ca" (OuterVolumeSpecName: "client-ca") pod "224644f1-a5e3-4fa5-8c1c-97030c1796c5" (UID: "224644f1-a5e3-4fa5-8c1c-97030c1796c5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.275707 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-config" (OuterVolumeSpecName: "config") pod "224644f1-a5e3-4fa5-8c1c-97030c1796c5" (UID: "224644f1-a5e3-4fa5-8c1c-97030c1796c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.280963 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/224644f1-a5e3-4fa5-8c1c-97030c1796c5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "224644f1-a5e3-4fa5-8c1c-97030c1796c5" (UID: "224644f1-a5e3-4fa5-8c1c-97030c1796c5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.282955 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/224644f1-a5e3-4fa5-8c1c-97030c1796c5-kube-api-access-vbndx" (OuterVolumeSpecName: "kube-api-access-vbndx") pod "224644f1-a5e3-4fa5-8c1c-97030c1796c5" (UID: "224644f1-a5e3-4fa5-8c1c-97030c1796c5"). 
InnerVolumeSpecName "kube-api-access-vbndx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.376626 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xksg2\" (UniqueName: \"kubernetes.io/projected/38b86fbe-129d-444e-b66d-d5cfcdfe502d-kube-api-access-xksg2\") pod \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.376758 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-client-ca\") pod \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.376802 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-config\") pod \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.376870 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b86fbe-129d-444e-b66d-d5cfcdfe502d-serving-cert\") pod \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.376950 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-proxy-ca-bundles\") pod \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.377451 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/224644f1-a5e3-4fa5-8c1c-97030c1796c5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.377504 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.377532 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbndx\" (UniqueName: \"kubernetes.io/projected/224644f1-a5e3-4fa5-8c1c-97030c1796c5-kube-api-access-vbndx\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.377558 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.378559 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "38b86fbe-129d-444e-b66d-d5cfcdfe502d" (UID: "38b86fbe-129d-444e-b66d-d5cfcdfe502d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.378586 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-client-ca" (OuterVolumeSpecName: "client-ca") pod "38b86fbe-129d-444e-b66d-d5cfcdfe502d" (UID: "38b86fbe-129d-444e-b66d-d5cfcdfe502d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.378667 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-config" (OuterVolumeSpecName: "config") pod "38b86fbe-129d-444e-b66d-d5cfcdfe502d" (UID: "38b86fbe-129d-444e-b66d-d5cfcdfe502d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.381120 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b86fbe-129d-444e-b66d-d5cfcdfe502d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "38b86fbe-129d-444e-b66d-d5cfcdfe502d" (UID: "38b86fbe-129d-444e-b66d-d5cfcdfe502d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.381423 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b86fbe-129d-444e-b66d-d5cfcdfe502d-kube-api-access-xksg2" (OuterVolumeSpecName: "kube-api-access-xksg2") pod "38b86fbe-129d-444e-b66d-d5cfcdfe502d" (UID: "38b86fbe-129d-444e-b66d-d5cfcdfe502d"). InnerVolumeSpecName "kube-api-access-xksg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.479611 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b86fbe-129d-444e-b66d-d5cfcdfe502d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.479696 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.479723 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xksg2\" (UniqueName: \"kubernetes.io/projected/38b86fbe-129d-444e-b66d-d5cfcdfe502d-kube-api-access-xksg2\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.479744 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.479765 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.548996 4947 generic.go:334] "Generic (PLEG): container finished" podID="38b86fbe-129d-444e-b66d-d5cfcdfe502d" containerID="3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a" exitCode=0 Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.549225 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" event={"ID":"38b86fbe-129d-444e-b66d-d5cfcdfe502d","Type":"ContainerDied","Data":"3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a"} Jan 25 00:14:26 crc 
kubenswrapper[4947]: I0125 00:14:26.549248 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.549272 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" event={"ID":"38b86fbe-129d-444e-b66d-d5cfcdfe502d","Type":"ContainerDied","Data":"559e97a56a39b8472fae5df4f9cc2883d13427ca4056db050883156ced694882"} Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.549314 4947 scope.go:117] "RemoveContainer" containerID="3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.559971 4947 generic.go:334] "Generic (PLEG): container finished" podID="224644f1-a5e3-4fa5-8c1c-97030c1796c5" containerID="de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272" exitCode=0 Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.560032 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7" event={"ID":"224644f1-a5e3-4fa5-8c1c-97030c1796c5","Type":"ContainerDied","Data":"de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272"} Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.560068 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7" event={"ID":"224644f1-a5e3-4fa5-8c1c-97030c1796c5","Type":"ContainerDied","Data":"a2cf4520fc74b19c2f49a3aa7b17652852b6f0732aaacfb718e26a7117d8c7ee"} Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.560203 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.598415 4947 scope.go:117] "RemoveContainer" containerID="3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a" Jan 25 00:14:26 crc kubenswrapper[4947]: E0125 00:14:26.599059 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a\": container with ID starting with 3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a not found: ID does not exist" containerID="3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.599101 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a"} err="failed to get container status \"3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a\": rpc error: code = NotFound desc = could not find container \"3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a\": container with ID starting with 3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a not found: ID does not exist" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.599158 4947 scope.go:117] "RemoveContainer" containerID="de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.600547 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"] Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.606676 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"] Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.615464 4947 scope.go:117] 
"RemoveContainer" containerID="de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.616082 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"] Jan 25 00:14:26 crc kubenswrapper[4947]: E0125 00:14:26.616263 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272\": container with ID starting with de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272 not found: ID does not exist" containerID="de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.616320 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272"} err="failed to get container status \"de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272\": rpc error: code = NotFound desc = could not find container \"de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272\": container with ID starting with de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272 not found: ID does not exist" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.620642 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"] Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.100877 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="224644f1-a5e3-4fa5-8c1c-97030c1796c5" path="/var/lib/kubelet/pods/224644f1-a5e3-4fa5-8c1c-97030c1796c5/volumes" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.102089 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b86fbe-129d-444e-b66d-d5cfcdfe502d" 
path="/var/lib/kubelet/pods/38b86fbe-129d-444e-b66d-d5cfcdfe502d/volumes" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.262067 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf"] Jan 25 00:14:27 crc kubenswrapper[4947]: E0125 00:14:27.262960 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b86fbe-129d-444e-b66d-d5cfcdfe502d" containerName="controller-manager" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.262998 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b86fbe-129d-444e-b66d-d5cfcdfe502d" containerName="controller-manager" Jan 25 00:14:27 crc kubenswrapper[4947]: E0125 00:14:27.263035 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="224644f1-a5e3-4fa5-8c1c-97030c1796c5" containerName="route-controller-manager" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.263052 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="224644f1-a5e3-4fa5-8c1c-97030c1796c5" containerName="route-controller-manager" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.263277 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="224644f1-a5e3-4fa5-8c1c-97030c1796c5" containerName="route-controller-manager" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.263305 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b86fbe-129d-444e-b66d-d5cfcdfe502d" containerName="controller-manager" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.263954 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.267000 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.267381 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.267757 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.268335 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.268418 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.268469 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.272929 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc"] Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.274191 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.276983 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.279176 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.279366 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.279581 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.279646 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.280025 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.282906 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf"] Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292070 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc"] Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292183 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-client-ca\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " 
pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292248 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-proxy-ca-bundles\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292310 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-client-ca\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292352 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-config\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292446 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mscq\" (UniqueName: \"kubernetes.io/projected/02b69aab-7cbd-4f58-8756-c1c5b615c33d-kube-api-access-8mscq\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292617 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-bf76f\" (UniqueName: \"kubernetes.io/projected/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-kube-api-access-bf76f\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292723 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02b69aab-7cbd-4f58-8756-c1c5b615c33d-serving-cert\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292793 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-serving-cert\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292874 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-config\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.298909 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393333 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-config\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393393 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-client-ca\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393426 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-proxy-ca-bundles\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393463 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-client-ca\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393487 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-config\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393510 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mscq\" (UniqueName: \"kubernetes.io/projected/02b69aab-7cbd-4f58-8756-c1c5b615c33d-kube-api-access-8mscq\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393537 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf76f\" (UniqueName: \"kubernetes.io/projected/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-kube-api-access-bf76f\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393593 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02b69aab-7cbd-4f58-8756-c1c5b615c33d-serving-cert\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393618 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-serving-cert\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.394805 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-config\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: 
\"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.395341 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-proxy-ca-bundles\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.395927 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-client-ca\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.396002 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-config\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.396715 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-client-ca\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.402817 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/02b69aab-7cbd-4f58-8756-c1c5b615c33d-serving-cert\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.403996 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-serving-cert\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.424969 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf76f\" (UniqueName: \"kubernetes.io/projected/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-kube-api-access-bf76f\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.427987 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mscq\" (UniqueName: \"kubernetes.io/projected/02b69aab-7cbd-4f58-8756-c1c5b615c33d-kube-api-access-8mscq\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.593860 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.619218 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.868242 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc"] Jan 25 00:14:27 crc kubenswrapper[4947]: W0125 00:14:27.873784 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02b69aab_7cbd_4f58_8756_c1c5b615c33d.slice/crio-38a5c9cb609e714547362a4cc611ea212282d2141fd9223ceadf053b5b3b5a11 WatchSource:0}: Error finding container 38a5c9cb609e714547362a4cc611ea212282d2141fd9223ceadf053b5b3b5a11: Status 404 returned error can't find the container with id 38a5c9cb609e714547362a4cc611ea212282d2141fd9223ceadf053b5b3b5a11 Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.910268 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf"] Jan 25 00:14:27 crc kubenswrapper[4947]: W0125 00:14:27.915147 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod563d5ba6_16c8_4a0e_8f80_6e0f76eac00e.slice/crio-501e260a4d828269ffd7f38ce4c58069d90dbc9c7273eb6c4c277594a413682e WatchSource:0}: Error finding container 501e260a4d828269ffd7f38ce4c58069d90dbc9c7273eb6c4c277594a413682e: Status 404 returned error can't find the container with id 501e260a4d828269ffd7f38ce4c58069d90dbc9c7273eb6c4c277594a413682e Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.576317 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" event={"ID":"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e","Type":"ContainerStarted","Data":"849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc"} Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.576703 4947 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.576720 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" event={"ID":"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e","Type":"ContainerStarted","Data":"501e260a4d828269ffd7f38ce4c58069d90dbc9c7273eb6c4c277594a413682e"} Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.577936 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" event={"ID":"02b69aab-7cbd-4f58-8756-c1c5b615c33d","Type":"ContainerStarted","Data":"d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea"} Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.577967 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" event={"ID":"02b69aab-7cbd-4f58-8756-c1c5b615c33d","Type":"ContainerStarted","Data":"38a5c9cb609e714547362a4cc611ea212282d2141fd9223ceadf053b5b3b5a11"} Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.578202 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.582837 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.593926 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" podStartSLOduration=3.593909113 podStartE2EDuration="3.593909113s" podCreationTimestamp="2026-01-25 00:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:14:28.59380854 +0000 UTC m=+307.826799000" watchObservedRunningTime="2026-01-25 00:14:28.593909113 +0000 UTC m=+307.826899553" Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.614032 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" podStartSLOduration=3.614011778 podStartE2EDuration="3.614011778s" podCreationTimestamp="2026-01-25 00:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:14:28.610755314 +0000 UTC m=+307.843745754" watchObservedRunningTime="2026-01-25 00:14:28.614011778 +0000 UTC m=+307.847002218" Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.623293 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:45 crc kubenswrapper[4947]: I0125 00:14:45.767251 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf"] Jan 25 00:14:45 crc kubenswrapper[4947]: I0125 00:14:45.769507 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" podUID="563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" containerName="route-controller-manager" containerID="cri-o://849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc" gracePeriod=30 Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.282030 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.393493 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-client-ca\") pod \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.393988 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf76f\" (UniqueName: \"kubernetes.io/projected/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-kube-api-access-bf76f\") pod \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.394047 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-config\") pod \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.394168 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-serving-cert\") pod \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.394976 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-client-ca" (OuterVolumeSpecName: "client-ca") pod "563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" (UID: "563d5ba6-16c8-4a0e-8f80-6e0f76eac00e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.395035 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-config" (OuterVolumeSpecName: "config") pod "563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" (UID: "563d5ba6-16c8-4a0e-8f80-6e0f76eac00e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.399641 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" (UID: "563d5ba6-16c8-4a0e-8f80-6e0f76eac00e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.399964 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-kube-api-access-bf76f" (OuterVolumeSpecName: "kube-api-access-bf76f") pod "563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" (UID: "563d5ba6-16c8-4a0e-8f80-6e0f76eac00e"). InnerVolumeSpecName "kube-api-access-bf76f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.495728 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.495776 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.495787 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf76f\" (UniqueName: \"kubernetes.io/projected/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-kube-api-access-bf76f\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.495796 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.696217 4947 generic.go:334] "Generic (PLEG): container finished" podID="563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" containerID="849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc" exitCode=0 Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.696278 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" event={"ID":"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e","Type":"ContainerDied","Data":"849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc"} Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.696291 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.696320 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" event={"ID":"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e","Type":"ContainerDied","Data":"501e260a4d828269ffd7f38ce4c58069d90dbc9c7273eb6c4c277594a413682e"} Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.696346 4947 scope.go:117] "RemoveContainer" containerID="849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.724649 4947 scope.go:117] "RemoveContainer" containerID="849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc" Jan 25 00:14:46 crc kubenswrapper[4947]: E0125 00:14:46.725172 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc\": container with ID starting with 849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc not found: ID does not exist" containerID="849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.725203 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc"} err="failed to get container status \"849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc\": rpc error: code = NotFound desc = could not find container \"849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc\": container with ID starting with 849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc not found: ID does not exist" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.732719 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf"] Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.741752 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf"] Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.072928 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.073012 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.102833 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" path="/var/lib/kubelet/pods/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e/volumes" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.295298 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c"] Jan 25 00:14:47 crc kubenswrapper[4947]: E0125 00:14:47.295698 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" containerName="route-controller-manager" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.295727 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" containerName="route-controller-manager" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.296001 4947 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" containerName="route-controller-manager" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.296772 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.299295 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.300115 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.300343 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.303362 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.307605 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.307741 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.310990 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c"] Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.409361 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpz7q\" (UniqueName: \"kubernetes.io/projected/13a2325d-775c-43ea-8a53-3011854a5878-kube-api-access-wpz7q\") pod 
\"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.409512 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a2325d-775c-43ea-8a53-3011854a5878-client-ca\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.410038 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a2325d-775c-43ea-8a53-3011854a5878-config\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.410117 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a2325d-775c-43ea-8a53-3011854a5878-serving-cert\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.511404 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a2325d-775c-43ea-8a53-3011854a5878-client-ca\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 
00:14:47.511506 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a2325d-775c-43ea-8a53-3011854a5878-config\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.511564 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a2325d-775c-43ea-8a53-3011854a5878-serving-cert\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.511664 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpz7q\" (UniqueName: \"kubernetes.io/projected/13a2325d-775c-43ea-8a53-3011854a5878-kube-api-access-wpz7q\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.513042 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a2325d-775c-43ea-8a53-3011854a5878-client-ca\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.513195 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a2325d-775c-43ea-8a53-3011854a5878-config\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: 
\"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.524071 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a2325d-775c-43ea-8a53-3011854a5878-serving-cert\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.559960 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpz7q\" (UniqueName: \"kubernetes.io/projected/13a2325d-775c-43ea-8a53-3011854a5878-kube-api-access-wpz7q\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.620790 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:48 crc kubenswrapper[4947]: I0125 00:14:48.066225 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c"] Jan 25 00:14:48 crc kubenswrapper[4947]: I0125 00:14:48.711879 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" event={"ID":"13a2325d-775c-43ea-8a53-3011854a5878","Type":"ContainerStarted","Data":"4f7167e8bc56ea2015f74a4a9155f08d29d52a53ff9dabdf6b3467b80213d58c"} Jan 25 00:14:48 crc kubenswrapper[4947]: I0125 00:14:48.712267 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:48 crc kubenswrapper[4947]: I0125 00:14:48.712284 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" event={"ID":"13a2325d-775c-43ea-8a53-3011854a5878","Type":"ContainerStarted","Data":"c493ad91366d94153354e4bd54c75ea5dd6f48d5a543429af74426b61243ed0b"} Jan 25 00:14:48 crc kubenswrapper[4947]: I0125 00:14:48.722332 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:48 crc kubenswrapper[4947]: I0125 00:14:48.733221 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" podStartSLOduration=3.733197864 podStartE2EDuration="3.733197864s" podCreationTimestamp="2026-01-25 00:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:14:48.729805707 +0000 UTC m=+327.962796197" 
watchObservedRunningTime="2026-01-25 00:14:48.733197864 +0000 UTC m=+327.966188354" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.176989 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn"] Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.178704 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.180674 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.184595 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.191720 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn"] Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.285527 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea9619f3-0314-493b-8fac-ab4d927cb2be-secret-volume\") pod \"collect-profiles-29488335-hfmmn\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.285827 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea9619f3-0314-493b-8fac-ab4d927cb2be-config-volume\") pod \"collect-profiles-29488335-hfmmn\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc 
kubenswrapper[4947]: I0125 00:15:00.285965 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxks6\" (UniqueName: \"kubernetes.io/projected/ea9619f3-0314-493b-8fac-ab4d927cb2be-kube-api-access-hxks6\") pod \"collect-profiles-29488335-hfmmn\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.386854 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxks6\" (UniqueName: \"kubernetes.io/projected/ea9619f3-0314-493b-8fac-ab4d927cb2be-kube-api-access-hxks6\") pod \"collect-profiles-29488335-hfmmn\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.386918 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea9619f3-0314-493b-8fac-ab4d927cb2be-secret-volume\") pod \"collect-profiles-29488335-hfmmn\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.386948 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea9619f3-0314-493b-8fac-ab4d927cb2be-config-volume\") pod \"collect-profiles-29488335-hfmmn\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.387912 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea9619f3-0314-493b-8fac-ab4d927cb2be-config-volume\") pod \"collect-profiles-29488335-hfmmn\" 
(UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.395787 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea9619f3-0314-493b-8fac-ab4d927cb2be-secret-volume\") pod \"collect-profiles-29488335-hfmmn\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.403515 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxks6\" (UniqueName: \"kubernetes.io/projected/ea9619f3-0314-493b-8fac-ab4d927cb2be-kube-api-access-hxks6\") pod \"collect-profiles-29488335-hfmmn\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.505254 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.921885 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn"] Jan 25 00:15:00 crc kubenswrapper[4947]: W0125 00:15:00.931251 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea9619f3_0314_493b_8fac_ab4d927cb2be.slice/crio-475d383198c34fac1d7b4be7345f00f6e1ed79707f478a21b19eca643f099541 WatchSource:0}: Error finding container 475d383198c34fac1d7b4be7345f00f6e1ed79707f478a21b19eca643f099541: Status 404 returned error can't find the container with id 475d383198c34fac1d7b4be7345f00f6e1ed79707f478a21b19eca643f099541 Jan 25 00:15:01 crc kubenswrapper[4947]: I0125 00:15:01.797985 4947 generic.go:334] "Generic (PLEG): container finished" podID="ea9619f3-0314-493b-8fac-ab4d927cb2be" containerID="24c5fcd7011a72bae7e895ae2163482a8b930148e6544cfb54bf6c32060f0397" exitCode=0 Jan 25 00:15:01 crc kubenswrapper[4947]: I0125 00:15:01.798121 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" event={"ID":"ea9619f3-0314-493b-8fac-ab4d927cb2be","Type":"ContainerDied","Data":"24c5fcd7011a72bae7e895ae2163482a8b930148e6544cfb54bf6c32060f0397"} Jan 25 00:15:01 crc kubenswrapper[4947]: I0125 00:15:01.798590 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" event={"ID":"ea9619f3-0314-493b-8fac-ab4d927cb2be","Type":"ContainerStarted","Data":"475d383198c34fac1d7b4be7345f00f6e1ed79707f478a21b19eca643f099541"} Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.224837 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.333457 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea9619f3-0314-493b-8fac-ab4d927cb2be-config-volume\") pod \"ea9619f3-0314-493b-8fac-ab4d927cb2be\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.333612 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea9619f3-0314-493b-8fac-ab4d927cb2be-secret-volume\") pod \"ea9619f3-0314-493b-8fac-ab4d927cb2be\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.333771 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxks6\" (UniqueName: \"kubernetes.io/projected/ea9619f3-0314-493b-8fac-ab4d927cb2be-kube-api-access-hxks6\") pod \"ea9619f3-0314-493b-8fac-ab4d927cb2be\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.335398 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea9619f3-0314-493b-8fac-ab4d927cb2be-config-volume" (OuterVolumeSpecName: "config-volume") pod "ea9619f3-0314-493b-8fac-ab4d927cb2be" (UID: "ea9619f3-0314-493b-8fac-ab4d927cb2be"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.342783 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea9619f3-0314-493b-8fac-ab4d927cb2be-kube-api-access-hxks6" (OuterVolumeSpecName: "kube-api-access-hxks6") pod "ea9619f3-0314-493b-8fac-ab4d927cb2be" (UID: "ea9619f3-0314-493b-8fac-ab4d927cb2be"). 
InnerVolumeSpecName "kube-api-access-hxks6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.343295 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea9619f3-0314-493b-8fac-ab4d927cb2be-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ea9619f3-0314-493b-8fac-ab4d927cb2be" (UID: "ea9619f3-0314-493b-8fac-ab4d927cb2be"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.435742 4947 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea9619f3-0314-493b-8fac-ab4d927cb2be-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.435802 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxks6\" (UniqueName: \"kubernetes.io/projected/ea9619f3-0314-493b-8fac-ab4d927cb2be-kube-api-access-hxks6\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.435818 4947 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea9619f3-0314-493b-8fac-ab4d927cb2be-config-volume\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.816073 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" event={"ID":"ea9619f3-0314-493b-8fac-ab4d927cb2be","Type":"ContainerDied","Data":"475d383198c34fac1d7b4be7345f00f6e1ed79707f478a21b19eca643f099541"} Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.816172 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.816198 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="475d383198c34fac1d7b4be7345f00f6e1ed79707f478a21b19eca643f099541" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.643625 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-bh4mm"] Jan 25 00:15:10 crc kubenswrapper[4947]: E0125 00:15:10.644412 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9619f3-0314-493b-8fac-ab4d927cb2be" containerName="collect-profiles" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.644426 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9619f3-0314-493b-8fac-ab4d927cb2be" containerName="collect-profiles" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.644538 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea9619f3-0314-493b-8fac-ab4d927cb2be" containerName="collect-profiles" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.644946 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.660386 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-bh4mm"] Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.750771 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.750838 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-registry-tls\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.750868 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71aa1854-f5dd-4aef-a80a-0121225c19d8-trusted-ca\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.750892 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/71aa1854-f5dd-4aef-a80a-0121225c19d8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.750925 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp5hl\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-kube-api-access-fp5hl\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.750944 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/71aa1854-f5dd-4aef-a80a-0121225c19d8-registry-certificates\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.750982 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-bound-sa-token\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.751017 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/71aa1854-f5dd-4aef-a80a-0121225c19d8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.802153 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.852185 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/71aa1854-f5dd-4aef-a80a-0121225c19d8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.852262 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp5hl\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-kube-api-access-fp5hl\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.852286 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/71aa1854-f5dd-4aef-a80a-0121225c19d8-registry-certificates\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.852321 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-bound-sa-token\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 
crc kubenswrapper[4947]: I0125 00:15:10.852335 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/71aa1854-f5dd-4aef-a80a-0121225c19d8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.852375 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-registry-tls\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.852392 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71aa1854-f5dd-4aef-a80a-0121225c19d8-trusted-ca\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.853268 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/71aa1854-f5dd-4aef-a80a-0121225c19d8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.854346 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/71aa1854-f5dd-4aef-a80a-0121225c19d8-registry-certificates\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.854378 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71aa1854-f5dd-4aef-a80a-0121225c19d8-trusted-ca\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.869051 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/71aa1854-f5dd-4aef-a80a-0121225c19d8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.872080 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp5hl\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-kube-api-access-fp5hl\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.873878 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-bound-sa-token\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.874193 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-registry-tls\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: 
\"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.971844 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:11 crc kubenswrapper[4947]: I0125 00:15:11.491868 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-bh4mm"] Jan 25 00:15:11 crc kubenswrapper[4947]: I0125 00:15:11.879895 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" event={"ID":"71aa1854-f5dd-4aef-a80a-0121225c19d8","Type":"ContainerStarted","Data":"809fda81fa0dcf1ee74bb5b7f70fc3d1f84e5139fa0f867f993b60eab1295fdc"} Jan 25 00:15:11 crc kubenswrapper[4947]: I0125 00:15:11.879938 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" event={"ID":"71aa1854-f5dd-4aef-a80a-0121225c19d8","Type":"ContainerStarted","Data":"d76c6cee3736aac6763294b3f629534a4a8f55d33317f67b29ab01e0c2161eb4"} Jan 25 00:15:11 crc kubenswrapper[4947]: I0125 00:15:11.880141 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:11 crc kubenswrapper[4947]: I0125 00:15:11.900749 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" podStartSLOduration=1.900730814 podStartE2EDuration="1.900730814s" podCreationTimestamp="2026-01-25 00:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:15:11.895832167 +0000 UTC m=+351.128822627" watchObservedRunningTime="2026-01-25 00:15:11.900730814 +0000 UTC m=+351.133721264" Jan 25 00:15:17 crc kubenswrapper[4947]: I0125 
00:15:17.072661 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:15:17 crc kubenswrapper[4947]: I0125 00:15:17.073244 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.244585 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qzj76"] Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.245552 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qzj76" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerName="registry-server" containerID="cri-o://625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1" gracePeriod=30 Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.261973 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-47m2l"] Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.262699 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-47m2l" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" containerName="registry-server" containerID="cri-o://d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289" gracePeriod=30 Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.283316 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vx9fn"] Jan 25 00:15:24 crc 
kubenswrapper[4947]: I0125 00:15:24.289269 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwwnp"] Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.289574 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" containerID="cri-o://c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e" gracePeriod=30 Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.289735 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wwwnp" podUID="06282146-8047-4104-b189-c896e5b7f8b9" containerName="registry-server" containerID="cri-o://8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b" gracePeriod=30 Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.315690 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ltw77"] Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.316029 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ltw77" podUID="49263faf-29f4-481c-aafd-a271a29c209a" containerName="registry-server" containerID="cri-o://482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3" gracePeriod=30 Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.323655 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbj6z"] Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.324408 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.335728 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbj6z"] Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.463965 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/94a09856-1120-4003-a601-ee3c9121eb51-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: \"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.464017 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94a09856-1120-4003-a601-ee3c9121eb51-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: \"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.464049 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpnjk\" (UniqueName: \"kubernetes.io/projected/94a09856-1120-4003-a601-ee3c9121eb51-kube-api-access-kpnjk\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: \"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.565014 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94a09856-1120-4003-a601-ee3c9121eb51-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: 
\"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.565073 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpnjk\" (UniqueName: \"kubernetes.io/projected/94a09856-1120-4003-a601-ee3c9121eb51-kube-api-access-kpnjk\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: \"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.565196 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/94a09856-1120-4003-a601-ee3c9121eb51-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: \"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.566816 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94a09856-1120-4003-a601-ee3c9121eb51-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: \"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.575834 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/94a09856-1120-4003-a601-ee3c9121eb51-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: \"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.583775 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kpnjk\" (UniqueName: \"kubernetes.io/projected/94a09856-1120-4003-a601-ee3c9121eb51-kube-api-access-kpnjk\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: \"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.763905 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.763969 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.812942 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.826512 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.859152 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.883869 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-catalog-content\") pod \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.883916 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-utilities\") pod \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.884530 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxzk6\" (UniqueName: \"kubernetes.io/projected/900aeb01-050c-45b8-936c-e5f8d73ebeb5-kube-api-access-vxzk6\") pod \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.889502 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-utilities" (OuterVolumeSpecName: "utilities") pod "900aeb01-050c-45b8-936c-e5f8d73ebeb5" (UID: "900aeb01-050c-45b8-936c-e5f8d73ebeb5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.890148 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900aeb01-050c-45b8-936c-e5f8d73ebeb5-kube-api-access-vxzk6" (OuterVolumeSpecName: "kube-api-access-vxzk6") pod "900aeb01-050c-45b8-936c-e5f8d73ebeb5" (UID: "900aeb01-050c-45b8-936c-e5f8d73ebeb5"). InnerVolumeSpecName "kube-api-access-vxzk6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.925715 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.980096 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "900aeb01-050c-45b8-936c-e5f8d73ebeb5" (UID: "900aeb01-050c-45b8-936c-e5f8d73ebeb5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.981760 4947 generic.go:334] "Generic (PLEG): container finished" podID="06282146-8047-4104-b189-c896e5b7f8b9" containerID="8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b" exitCode=0 Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.981818 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwwnp" event={"ID":"06282146-8047-4104-b189-c896e5b7f8b9","Type":"ContainerDied","Data":"8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b"} Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.981868 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwwnp" event={"ID":"06282146-8047-4104-b189-c896e5b7f8b9","Type":"ContainerDied","Data":"b331dec527a60132595158dee76520a26cd144ddc3aa45e156eaf1db6341fcb3"} Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.981888 4947 scope.go:117] "RemoveContainer" containerID="8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.982025 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986070 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-catalog-content\") pod \"06282146-8047-4104-b189-c896e5b7f8b9\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986299 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-catalog-content\") pod \"ad96bcad-395b-4844-9992-00acdf7436c2\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986368 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-utilities\") pod \"ad96bcad-395b-4844-9992-00acdf7436c2\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986401 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-trusted-ca\") pod \"fa35d682-53d6-4191-9cf9-f48b9f74e858\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986477 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhdwv\" (UniqueName: \"kubernetes.io/projected/ad96bcad-395b-4844-9992-00acdf7436c2-kube-api-access-fhdwv\") pod \"ad96bcad-395b-4844-9992-00acdf7436c2\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986543 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-8th5h\" (UniqueName: \"kubernetes.io/projected/06282146-8047-4104-b189-c896e5b7f8b9-kube-api-access-8th5h\") pod \"06282146-8047-4104-b189-c896e5b7f8b9\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986590 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxnl4\" (UniqueName: \"kubernetes.io/projected/fa35d682-53d6-4191-9cf9-f48b9f74e858-kube-api-access-sxnl4\") pod \"fa35d682-53d6-4191-9cf9-f48b9f74e858\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986621 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-operator-metrics\") pod \"fa35d682-53d6-4191-9cf9-f48b9f74e858\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986648 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-utilities\") pod \"06282146-8047-4104-b189-c896e5b7f8b9\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986883 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxzk6\" (UniqueName: \"kubernetes.io/projected/900aeb01-050c-45b8-936c-e5f8d73ebeb5-kube-api-access-vxzk6\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986902 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986914 4947 reconciler_common.go:293] "Volume detached for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.987570 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-utilities" (OuterVolumeSpecName: "utilities") pod "ad96bcad-395b-4844-9992-00acdf7436c2" (UID: "ad96bcad-395b-4844-9992-00acdf7436c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.987622 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "fa35d682-53d6-4191-9cf9-f48b9f74e858" (UID: "fa35d682-53d6-4191-9cf9-f48b9f74e858"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.988309 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-utilities" (OuterVolumeSpecName: "utilities") pod "06282146-8047-4104-b189-c896e5b7f8b9" (UID: "06282146-8047-4104-b189-c896e5b7f8b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.991556 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa35d682-53d6-4191-9cf9-f48b9f74e858-kube-api-access-sxnl4" (OuterVolumeSpecName: "kube-api-access-sxnl4") pod "fa35d682-53d6-4191-9cf9-f48b9f74e858" (UID: "fa35d682-53d6-4191-9cf9-f48b9f74e858"). InnerVolumeSpecName "kube-api-access-sxnl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.992518 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "fa35d682-53d6-4191-9cf9-f48b9f74e858" (UID: "fa35d682-53d6-4191-9cf9-f48b9f74e858"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.992540 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad96bcad-395b-4844-9992-00acdf7436c2-kube-api-access-fhdwv" (OuterVolumeSpecName: "kube-api-access-fhdwv") pod "ad96bcad-395b-4844-9992-00acdf7436c2" (UID: "ad96bcad-395b-4844-9992-00acdf7436c2"). InnerVolumeSpecName "kube-api-access-fhdwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.992869 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06282146-8047-4104-b189-c896e5b7f8b9-kube-api-access-8th5h" (OuterVolumeSpecName: "kube-api-access-8th5h") pod "06282146-8047-4104-b189-c896e5b7f8b9" (UID: "06282146-8047-4104-b189-c896e5b7f8b9"). InnerVolumeSpecName "kube-api-access-8th5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.994574 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.994650 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzj76" event={"ID":"900aeb01-050c-45b8-936c-e5f8d73ebeb5","Type":"ContainerDied","Data":"625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1"} Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.994442 4947 generic.go:334] "Generic (PLEG): container finished" podID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerID="625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1" exitCode=0 Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.995524 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzj76" event={"ID":"900aeb01-050c-45b8-936c-e5f8d73ebeb5","Type":"ContainerDied","Data":"596449ceb20f31ed206815663af8903fec2583551204ec75b85d39be48c2895f"} Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.001225 4947 generic.go:334] "Generic (PLEG): container finished" podID="49263faf-29f4-481c-aafd-a271a29c209a" containerID="482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3" exitCode=0 Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.001297 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltw77" event={"ID":"49263faf-29f4-481c-aafd-a271a29c209a","Type":"ContainerDied","Data":"482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3"} Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.001323 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltw77" event={"ID":"49263faf-29f4-481c-aafd-a271a29c209a","Type":"ContainerDied","Data":"23655937ab043534ca01347d9a2964b60c41f2a6eae0705e6c094b13084701de"} Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.001389 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.006335 4947 scope.go:117] "RemoveContainer" containerID="9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.008931 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbj6z"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.012109 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.012232 4947 generic.go:334] "Generic (PLEG): container finished" podID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerID="c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e" exitCode=0 Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.014234 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" event={"ID":"fa35d682-53d6-4191-9cf9-f48b9f74e858","Type":"ContainerDied","Data":"c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e"} Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.014774 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" event={"ID":"fa35d682-53d6-4191-9cf9-f48b9f74e858","Type":"ContainerDied","Data":"2742ff9aa7f3cbee3d8389c7f258cc4ce04fcb1e9943ebf713523dc12c66fb09"} Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.021485 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06282146-8047-4104-b189-c896e5b7f8b9" (UID: "06282146-8047-4104-b189-c896e5b7f8b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.024092 4947 scope.go:117] "RemoveContainer" containerID="b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.026904 4947 generic.go:334] "Generic (PLEG): container finished" podID="ad96bcad-395b-4844-9992-00acdf7436c2" containerID="d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289" exitCode=0 Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.027039 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.027054 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47m2l" event={"ID":"ad96bcad-395b-4844-9992-00acdf7436c2","Type":"ContainerDied","Data":"d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289"} Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.028107 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47m2l" event={"ID":"ad96bcad-395b-4844-9992-00acdf7436c2","Type":"ContainerDied","Data":"6cbc84951af1c9fb04adcfedc17cf7a2205629dcc8722ddaa8c1026d70782225"} Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.043321 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vx9fn"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.048665 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vx9fn"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.053913 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"ad96bcad-395b-4844-9992-00acdf7436c2" (UID: "ad96bcad-395b-4844-9992-00acdf7436c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.060727 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qzj76"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.063960 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qzj76"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.067208 4947 scope.go:117] "RemoveContainer" containerID="8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.067554 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b\": container with ID starting with 8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b not found: ID does not exist" containerID="8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.067584 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b"} err="failed to get container status \"8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b\": rpc error: code = NotFound desc = could not find container \"8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b\": container with ID starting with 8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.067608 4947 scope.go:117] "RemoveContainer" containerID="9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 
00:15:25.067968 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e\": container with ID starting with 9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e not found: ID does not exist" containerID="9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.067991 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e"} err="failed to get container status \"9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e\": rpc error: code = NotFound desc = could not find container \"9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e\": container with ID starting with 9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.068011 4947 scope.go:117] "RemoveContainer" containerID="b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.068222 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c\": container with ID starting with b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c not found: ID does not exist" containerID="b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.068334 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c"} err="failed to get container status \"b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c\": rpc 
error: code = NotFound desc = could not find container \"b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c\": container with ID starting with b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.068354 4947 scope.go:117] "RemoveContainer" containerID="625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.088472 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-catalog-content\") pod \"49263faf-29f4-481c-aafd-a271a29c209a\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.088551 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-utilities\") pod \"49263faf-29f4-481c-aafd-a271a29c209a\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.088579 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ndnd\" (UniqueName: \"kubernetes.io/projected/49263faf-29f4-481c-aafd-a271a29c209a-kube-api-access-7ndnd\") pod \"49263faf-29f4-481c-aafd-a271a29c209a\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.089224 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-utilities" (OuterVolumeSpecName: "utilities") pod "49263faf-29f4-481c-aafd-a271a29c209a" (UID: "49263faf-29f4-481c-aafd-a271a29c209a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090270 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090648 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090745 4947 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090763 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhdwv\" (UniqueName: \"kubernetes.io/projected/ad96bcad-395b-4844-9992-00acdf7436c2-kube-api-access-fhdwv\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090774 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090784 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8th5h\" (UniqueName: \"kubernetes.io/projected/06282146-8047-4104-b189-c896e5b7f8b9-kube-api-access-8th5h\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090793 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxnl4\" (UniqueName: \"kubernetes.io/projected/fa35d682-53d6-4191-9cf9-f48b9f74e858-kube-api-access-sxnl4\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc 
kubenswrapper[4947]: I0125 00:15:25.090802 4947 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090811 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090819 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.092708 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49263faf-29f4-481c-aafd-a271a29c209a-kube-api-access-7ndnd" (OuterVolumeSpecName: "kube-api-access-7ndnd") pod "49263faf-29f4-481c-aafd-a271a29c209a" (UID: "49263faf-29f4-481c-aafd-a271a29c209a"). InnerVolumeSpecName "kube-api-access-7ndnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.096508 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" path="/var/lib/kubelet/pods/900aeb01-050c-45b8-936c-e5f8d73ebeb5/volumes" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.096757 4947 scope.go:117] "RemoveContainer" containerID="43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.097171 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" path="/var/lib/kubelet/pods/fa35d682-53d6-4191-9cf9-f48b9f74e858/volumes" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.126780 4947 scope.go:117] "RemoveContainer" containerID="3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.145402 4947 scope.go:117] "RemoveContainer" containerID="625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.145992 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1\": container with ID starting with 625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1 not found: ID does not exist" containerID="625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.146101 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1"} err="failed to get container status \"625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1\": rpc error: code = NotFound desc = could not find container 
\"625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1\": container with ID starting with 625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1 not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.146221 4947 scope.go:117] "RemoveContainer" containerID="43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.146702 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b\": container with ID starting with 43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b not found: ID does not exist" containerID="43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.146751 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b"} err="failed to get container status \"43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b\": rpc error: code = NotFound desc = could not find container \"43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b\": container with ID starting with 43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.146786 4947 scope.go:117] "RemoveContainer" containerID="3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.147155 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0\": container with ID starting with 3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0 not found: ID does not exist" 
containerID="3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.147205 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0"} err="failed to get container status \"3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0\": rpc error: code = NotFound desc = could not find container \"3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0\": container with ID starting with 3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0 not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.147242 4947 scope.go:117] "RemoveContainer" containerID="482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.162753 4947 scope.go:117] "RemoveContainer" containerID="4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.183355 4947 scope.go:117] "RemoveContainer" containerID="e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.192529 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ndnd\" (UniqueName: \"kubernetes.io/projected/49263faf-29f4-481c-aafd-a271a29c209a-kube-api-access-7ndnd\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.202108 4947 scope.go:117] "RemoveContainer" containerID="482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.202795 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3\": container with ID starting with 
482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3 not found: ID does not exist" containerID="482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.202923 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3"} err="failed to get container status \"482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3\": rpc error: code = NotFound desc = could not find container \"482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3\": container with ID starting with 482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3 not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.203035 4947 scope.go:117] "RemoveContainer" containerID="4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.203804 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16\": container with ID starting with 4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16 not found: ID does not exist" containerID="4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.203856 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16"} err="failed to get container status \"4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16\": rpc error: code = NotFound desc = could not find container \"4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16\": container with ID starting with 4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16 not found: ID does not 
exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.203886 4947 scope.go:117] "RemoveContainer" containerID="e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.204358 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116\": container with ID starting with e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116 not found: ID does not exist" containerID="e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.204391 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116"} err="failed to get container status \"e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116\": rpc error: code = NotFound desc = could not find container \"e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116\": container with ID starting with e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116 not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.204410 4947 scope.go:117] "RemoveContainer" containerID="c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.223179 4947 scope.go:117] "RemoveContainer" containerID="7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.223919 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49263faf-29f4-481c-aafd-a271a29c209a" (UID: "49263faf-29f4-481c-aafd-a271a29c209a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.239627 4947 scope.go:117] "RemoveContainer" containerID="c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.240385 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e\": container with ID starting with c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e not found: ID does not exist" containerID="c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.240436 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e"} err="failed to get container status \"c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e\": rpc error: code = NotFound desc = could not find container \"c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e\": container with ID starting with c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.240472 4947 scope.go:117] "RemoveContainer" containerID="7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.241030 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369\": container with ID starting with 7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369 not found: ID does not exist" containerID="7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.241057 
4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369"} err="failed to get container status \"7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369\": rpc error: code = NotFound desc = could not find container \"7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369\": container with ID starting with 7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369 not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.241075 4947 scope.go:117] "RemoveContainer" containerID="d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.261512 4947 scope.go:117] "RemoveContainer" containerID="f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.277698 4947 scope.go:117] "RemoveContainer" containerID="95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.294469 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.314930 4947 scope.go:117] "RemoveContainer" containerID="d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.316820 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289\": container with ID starting with d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289 not found: ID does not exist" containerID="d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289" Jan 25 00:15:25 
crc kubenswrapper[4947]: I0125 00:15:25.316872 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289"} err="failed to get container status \"d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289\": rpc error: code = NotFound desc = could not find container \"d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289\": container with ID starting with d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289 not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.316906 4947 scope.go:117] "RemoveContainer" containerID="f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.318602 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7\": container with ID starting with f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7 not found: ID does not exist" containerID="f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.318632 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7"} err="failed to get container status \"f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7\": rpc error: code = NotFound desc = could not find container \"f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7\": container with ID starting with f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7 not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.318647 4947 scope.go:117] "RemoveContainer" containerID="95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643" Jan 25 
00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.318912 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643\": container with ID starting with 95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643 not found: ID does not exist" containerID="95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.318929 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643"} err="failed to get container status \"95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643\": rpc error: code = NotFound desc = could not find container \"95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643\": container with ID starting with 95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643 not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.321287 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwwnp"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.326941 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwwnp"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.351309 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ltw77"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.358560 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ltw77"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.365268 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-47m2l"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.369405 4947 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-47m2l"] Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.460414 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49263faf_29f4_481c_aafd_a271a29c209a.slice/crio-23655937ab043534ca01347d9a2964b60c41f2a6eae0705e6c094b13084701de\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49263faf_29f4_481c_aafd_a271a29c209a.slice\": RecentStats: unable to find data in memory cache]" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.035430 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" event={"ID":"94a09856-1120-4003-a601-ee3c9121eb51","Type":"ContainerStarted","Data":"51d0122ca4b1dc3ac3173be4b1a89e3a7fcd2485ee9d57122adccc93f77adbb4"} Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.035772 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.035784 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" event={"ID":"94a09856-1120-4003-a601-ee3c9121eb51","Type":"ContainerStarted","Data":"1511ab3dd5a6cf0320e3eea749afafe601b00c1cb5791934931bffb2682194fa"} Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.039214 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.055934 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" podStartSLOduration=2.055916232 podStartE2EDuration="2.055916232s" 
podCreationTimestamp="2026-01-25 00:15:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:15:26.055894301 +0000 UTC m=+365.288884761" watchObservedRunningTime="2026-01-25 00:15:26.055916232 +0000 UTC m=+365.288906672" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.640911 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lkvvh"] Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641167 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" containerName="extract-utilities" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641184 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" containerName="extract-utilities" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641198 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641205 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641221 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49263faf-29f4-481c-aafd-a271a29c209a" containerName="extract-content" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641229 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="49263faf-29f4-481c-aafd-a271a29c209a" containerName="extract-content" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641241 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06282146-8047-4104-b189-c896e5b7f8b9" containerName="extract-content" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641249 4947 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="06282146-8047-4104-b189-c896e5b7f8b9" containerName="extract-content" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641261 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49263faf-29f4-481c-aafd-a271a29c209a" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641269 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="49263faf-29f4-481c-aafd-a271a29c209a" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641281 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641288 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641300 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerName="extract-utilities" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641307 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerName="extract-utilities" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641317 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49263faf-29f4-481c-aafd-a271a29c209a" containerName="extract-utilities" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641325 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="49263faf-29f4-481c-aafd-a271a29c209a" containerName="extract-utilities" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641334 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06282146-8047-4104-b189-c896e5b7f8b9" containerName="extract-utilities" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641342 4947 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="06282146-8047-4104-b189-c896e5b7f8b9" containerName="extract-utilities" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641351 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" containerName="extract-content" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641358 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" containerName="extract-content" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641367 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06282146-8047-4104-b189-c896e5b7f8b9" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641374 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="06282146-8047-4104-b189-c896e5b7f8b9" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641384 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerName="extract-content" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641391 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerName="extract-content" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641401 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641408 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641528 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="49263faf-29f4-481c-aafd-a271a29c209a" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641541 4947 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641556 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641565 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641576 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="06282146-8047-4104-b189-c896e5b7f8b9" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641585 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641696 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641704 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.642406 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.645870 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.647789 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lkvvh"] Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.813350 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd72v\" (UniqueName: \"kubernetes.io/projected/8f150ea3-0af6-4206-9d74-e15f901e571b-kube-api-access-kd72v\") pod \"certified-operators-lkvvh\" (UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.813448 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f150ea3-0af6-4206-9d74-e15f901e571b-catalog-content\") pod \"certified-operators-lkvvh\" (UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.813514 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f150ea3-0af6-4206-9d74-e15f901e571b-utilities\") pod \"certified-operators-lkvvh\" (UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.914469 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd72v\" (UniqueName: \"kubernetes.io/projected/8f150ea3-0af6-4206-9d74-e15f901e571b-kube-api-access-kd72v\") pod \"certified-operators-lkvvh\" 
(UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.914619 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f150ea3-0af6-4206-9d74-e15f901e571b-catalog-content\") pod \"certified-operators-lkvvh\" (UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.915238 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f150ea3-0af6-4206-9d74-e15f901e571b-catalog-content\") pod \"certified-operators-lkvvh\" (UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.915395 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f150ea3-0af6-4206-9d74-e15f901e571b-utilities\") pod \"certified-operators-lkvvh\" (UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.915774 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f150ea3-0af6-4206-9d74-e15f901e571b-utilities\") pod \"certified-operators-lkvvh\" (UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.934819 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd72v\" (UniqueName: \"kubernetes.io/projected/8f150ea3-0af6-4206-9d74-e15f901e571b-kube-api-access-kd72v\") pod \"certified-operators-lkvvh\" (UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " 
pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.976915 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.096550 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06282146-8047-4104-b189-c896e5b7f8b9" path="/var/lib/kubelet/pods/06282146-8047-4104-b189-c896e5b7f8b9/volumes" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.097296 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49263faf-29f4-481c-aafd-a271a29c209a" path="/var/lib/kubelet/pods/49263faf-29f4-481c-aafd-a271a29c209a/volumes" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.097979 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" path="/var/lib/kubelet/pods/ad96bcad-395b-4844-9992-00acdf7436c2/volumes" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.172985 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lkvvh"] Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.631035 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hwxx4"] Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.634271 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.637555 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.643170 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwxx4"] Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.724704 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-utilities\") pod \"redhat-marketplace-hwxx4\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.724760 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-catalog-content\") pod \"redhat-marketplace-hwxx4\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.724788 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdqff\" (UniqueName: \"kubernetes.io/projected/24ae0891-d29f-45fe-be30-a46f76a39dda-kube-api-access-hdqff\") pod \"redhat-marketplace-hwxx4\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.825535 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-utilities\") pod \"redhat-marketplace-hwxx4\" (UID: 
\"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.825594 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-catalog-content\") pod \"redhat-marketplace-hwxx4\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.825617 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdqff\" (UniqueName: \"kubernetes.io/projected/24ae0891-d29f-45fe-be30-a46f76a39dda-kube-api-access-hdqff\") pod \"redhat-marketplace-hwxx4\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.826207 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-utilities\") pod \"redhat-marketplace-hwxx4\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.826243 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-catalog-content\") pod \"redhat-marketplace-hwxx4\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.842913 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdqff\" (UniqueName: \"kubernetes.io/projected/24ae0891-d29f-45fe-be30-a46f76a39dda-kube-api-access-hdqff\") pod \"redhat-marketplace-hwxx4\" (UID: 
\"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.954646 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:28 crc kubenswrapper[4947]: I0125 00:15:28.059358 4947 generic.go:334] "Generic (PLEG): container finished" podID="8f150ea3-0af6-4206-9d74-e15f901e571b" containerID="c325dde1e0617483647b9a62738677df1b58ca118c1311ba35f40be31385b72f" exitCode=0 Jan 25 00:15:28 crc kubenswrapper[4947]: I0125 00:15:28.059420 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkvvh" event={"ID":"8f150ea3-0af6-4206-9d74-e15f901e571b","Type":"ContainerDied","Data":"c325dde1e0617483647b9a62738677df1b58ca118c1311ba35f40be31385b72f"} Jan 25 00:15:28 crc kubenswrapper[4947]: I0125 00:15:28.059763 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkvvh" event={"ID":"8f150ea3-0af6-4206-9d74-e15f901e571b","Type":"ContainerStarted","Data":"343b3cc67b71960c0db3cda1ab8b65d5f69c686aa12b39af56ede8682e25082d"} Jan 25 00:15:28 crc kubenswrapper[4947]: I0125 00:15:28.392538 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwxx4"] Jan 25 00:15:28 crc kubenswrapper[4947]: W0125 00:15:28.397676 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24ae0891_d29f_45fe_be30_a46f76a39dda.slice/crio-aea55c0ecb2555c344ea84eb69f61a3ca31d0786e037f7a2f29477da1e97cbae WatchSource:0}: Error finding container aea55c0ecb2555c344ea84eb69f61a3ca31d0786e037f7a2f29477da1e97cbae: Status 404 returned error can't find the container with id aea55c0ecb2555c344ea84eb69f61a3ca31d0786e037f7a2f29477da1e97cbae Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.036086 4947 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-operators-m2ddl"] Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.039369 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.041056 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.054585 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m2ddl"] Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.065898 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkvvh" event={"ID":"8f150ea3-0af6-4206-9d74-e15f901e571b","Type":"ContainerStarted","Data":"a7c6bc9f465b60198c1a17fb7898cdb140f380c0e39c47f7d0c129d8e7b0123b"} Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.067646 4947 generic.go:334] "Generic (PLEG): container finished" podID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerID="3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97" exitCode=0 Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.067683 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwxx4" event={"ID":"24ae0891-d29f-45fe-be30-a46f76a39dda","Type":"ContainerDied","Data":"3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97"} Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.067716 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwxx4" event={"ID":"24ae0891-d29f-45fe-be30-a46f76a39dda","Type":"ContainerStarted","Data":"aea55c0ecb2555c344ea84eb69f61a3ca31d0786e037f7a2f29477da1e97cbae"} Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.149832 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-utilities\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.149897 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-catalog-content\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.149927 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q76j\" (UniqueName: \"kubernetes.io/projected/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-kube-api-access-9q76j\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.250932 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-utilities\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.251018 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-catalog-content\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.251043 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9q76j\" (UniqueName: \"kubernetes.io/projected/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-kube-api-access-9q76j\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.252242 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-utilities\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.252379 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-catalog-content\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.269925 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q76j\" (UniqueName: \"kubernetes.io/projected/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-kube-api-access-9q76j\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.363698 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.772846 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m2ddl"] Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.028102 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g7982"] Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.029778 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.035566 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g7982"] Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.038797 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.077064 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwxx4" event={"ID":"24ae0891-d29f-45fe-be30-a46f76a39dda","Type":"ContainerStarted","Data":"24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870"} Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.080997 4947 generic.go:334] "Generic (PLEG): container finished" podID="e8adfaf1-4e17-430c-970e-1cbf2e58c18a" containerID="8b87350da3c79cd080e933163c12159c1bea5e8c863270f954810d9873b9c4f6" exitCode=0 Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.081053 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2ddl" event={"ID":"e8adfaf1-4e17-430c-970e-1cbf2e58c18a","Type":"ContainerDied","Data":"8b87350da3c79cd080e933163c12159c1bea5e8c863270f954810d9873b9c4f6"} Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.081072 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-m2ddl" event={"ID":"e8adfaf1-4e17-430c-970e-1cbf2e58c18a","Type":"ContainerStarted","Data":"918fd92f2892d9b2c9e7e47c9bb4ea407961cfb8ea459b27470f445d87973b27"} Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.086741 4947 generic.go:334] "Generic (PLEG): container finished" podID="8f150ea3-0af6-4206-9d74-e15f901e571b" containerID="a7c6bc9f465b60198c1a17fb7898cdb140f380c0e39c47f7d0c129d8e7b0123b" exitCode=0 Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.087042 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkvvh" event={"ID":"8f150ea3-0af6-4206-9d74-e15f901e571b","Type":"ContainerDied","Data":"a7c6bc9f465b60198c1a17fb7898cdb140f380c0e39c47f7d0c129d8e7b0123b"} Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.162794 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e39c693-6291-4810-863e-fd3e5cd45fbc-catalog-content\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.162868 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2btf\" (UniqueName: \"kubernetes.io/projected/5e39c693-6291-4810-863e-fd3e5cd45fbc-kube-api-access-h2btf\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.162890 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e39c693-6291-4810-863e-fd3e5cd45fbc-utilities\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " 
pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.264548 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e39c693-6291-4810-863e-fd3e5cd45fbc-catalog-content\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.264635 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2btf\" (UniqueName: \"kubernetes.io/projected/5e39c693-6291-4810-863e-fd3e5cd45fbc-kube-api-access-h2btf\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.264668 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e39c693-6291-4810-863e-fd3e5cd45fbc-utilities\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.265230 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e39c693-6291-4810-863e-fd3e5cd45fbc-utilities\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.265507 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e39c693-6291-4810-863e-fd3e5cd45fbc-catalog-content\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " 
pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.296313 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2btf\" (UniqueName: \"kubernetes.io/projected/5e39c693-6291-4810-863e-fd3e5cd45fbc-kube-api-access-h2btf\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.353190 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.757403 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g7982"] Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.978683 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.033611 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mprs4"] Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.108526 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkvvh" event={"ID":"8f150ea3-0af6-4206-9d74-e15f901e571b","Type":"ContainerStarted","Data":"ce66a0985d325b64c31e4b39c0896b88ee430ddaccc6628719c4b0baa20664ca"} Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.111704 4947 generic.go:334] "Generic (PLEG): container finished" podID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerID="24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870" exitCode=0 Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.111783 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwxx4" 
event={"ID":"24ae0891-d29f-45fe-be30-a46f76a39dda","Type":"ContainerDied","Data":"24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870"} Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.111804 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwxx4" event={"ID":"24ae0891-d29f-45fe-be30-a46f76a39dda","Type":"ContainerStarted","Data":"e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e"} Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.115377 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2ddl" event={"ID":"e8adfaf1-4e17-430c-970e-1cbf2e58c18a","Type":"ContainerStarted","Data":"45b6caad89697237a873ac3dd397ab2e0d67a08d04b447395ae4e0b4fcea4215"} Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.117260 4947 generic.go:334] "Generic (PLEG): container finished" podID="5e39c693-6291-4810-863e-fd3e5cd45fbc" containerID="0d6bf80afeab0eb1908534cef87952e842b53fdac5af63be09110db5a79ae1e6" exitCode=0 Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.117316 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7982" event={"ID":"5e39c693-6291-4810-863e-fd3e5cd45fbc","Type":"ContainerDied","Data":"0d6bf80afeab0eb1908534cef87952e842b53fdac5af63be09110db5a79ae1e6"} Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.117345 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7982" event={"ID":"5e39c693-6291-4810-863e-fd3e5cd45fbc","Type":"ContainerStarted","Data":"3e42c04272d04088056859078860477dd028a9ec745baadff96bbd49b27609a8"} Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.170769 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hwxx4" podStartSLOduration=2.544359241 podStartE2EDuration="4.170740929s" podCreationTimestamp="2026-01-25 00:15:27 +0000 
UTC" firstStartedPulling="2026-01-25 00:15:29.068908881 +0000 UTC m=+368.301899321" lastFinishedPulling="2026-01-25 00:15:30.695290579 +0000 UTC m=+369.928281009" observedRunningTime="2026-01-25 00:15:31.168641854 +0000 UTC m=+370.401632294" watchObservedRunningTime="2026-01-25 00:15:31.170740929 +0000 UTC m=+370.403731369" Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.183623 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lkvvh" podStartSLOduration=2.711128419 podStartE2EDuration="5.18360395s" podCreationTimestamp="2026-01-25 00:15:26 +0000 UTC" firstStartedPulling="2026-01-25 00:15:28.06089756 +0000 UTC m=+367.293888000" lastFinishedPulling="2026-01-25 00:15:30.533373091 +0000 UTC m=+369.766363531" observedRunningTime="2026-01-25 00:15:31.183107297 +0000 UTC m=+370.416097737" watchObservedRunningTime="2026-01-25 00:15:31.18360395 +0000 UTC m=+370.416594390" Jan 25 00:15:32 crc kubenswrapper[4947]: I0125 00:15:32.127591 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7982" event={"ID":"5e39c693-6291-4810-863e-fd3e5cd45fbc","Type":"ContainerStarted","Data":"497fc5e2192eac91e48279f7de5d750ddcc29f1d25f92e38e4fb7244e602915a"} Jan 25 00:15:32 crc kubenswrapper[4947]: I0125 00:15:32.129937 4947 generic.go:334] "Generic (PLEG): container finished" podID="e8adfaf1-4e17-430c-970e-1cbf2e58c18a" containerID="45b6caad89697237a873ac3dd397ab2e0d67a08d04b447395ae4e0b4fcea4215" exitCode=0 Jan 25 00:15:32 crc kubenswrapper[4947]: I0125 00:15:32.130885 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2ddl" event={"ID":"e8adfaf1-4e17-430c-970e-1cbf2e58c18a","Type":"ContainerDied","Data":"45b6caad89697237a873ac3dd397ab2e0d67a08d04b447395ae4e0b4fcea4215"} Jan 25 00:15:33 crc kubenswrapper[4947]: I0125 00:15:33.137997 4947 generic.go:334] "Generic (PLEG): container finished" 
podID="5e39c693-6291-4810-863e-fd3e5cd45fbc" containerID="497fc5e2192eac91e48279f7de5d750ddcc29f1d25f92e38e4fb7244e602915a" exitCode=0 Jan 25 00:15:33 crc kubenswrapper[4947]: I0125 00:15:33.138079 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7982" event={"ID":"5e39c693-6291-4810-863e-fd3e5cd45fbc","Type":"ContainerDied","Data":"497fc5e2192eac91e48279f7de5d750ddcc29f1d25f92e38e4fb7244e602915a"} Jan 25 00:15:33 crc kubenswrapper[4947]: I0125 00:15:33.141792 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2ddl" event={"ID":"e8adfaf1-4e17-430c-970e-1cbf2e58c18a","Type":"ContainerStarted","Data":"e8b3d19625d955735254aa8987188f61e2f83fdd6e6b4484871567db3765741a"} Jan 25 00:15:33 crc kubenswrapper[4947]: I0125 00:15:33.179487 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m2ddl" podStartSLOduration=1.694324854 podStartE2EDuration="4.179463282s" podCreationTimestamp="2026-01-25 00:15:29 +0000 UTC" firstStartedPulling="2026-01-25 00:15:30.084411706 +0000 UTC m=+369.317402166" lastFinishedPulling="2026-01-25 00:15:32.569550154 +0000 UTC m=+371.802540594" observedRunningTime="2026-01-25 00:15:33.176340832 +0000 UTC m=+372.409331272" watchObservedRunningTime="2026-01-25 00:15:33.179463282 +0000 UTC m=+372.412453732" Jan 25 00:15:35 crc kubenswrapper[4947]: I0125 00:15:35.160035 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7982" event={"ID":"5e39c693-6291-4810-863e-fd3e5cd45fbc","Type":"ContainerStarted","Data":"c252e5c047622c2b6848df213b513c8bfc163d407948dc851411ef8c8d007320"} Jan 25 00:15:35 crc kubenswrapper[4947]: I0125 00:15:35.189606 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g7982" podStartSLOduration=2.6829224 podStartE2EDuration="5.189578163s" 
podCreationTimestamp="2026-01-25 00:15:30 +0000 UTC" firstStartedPulling="2026-01-25 00:15:31.126322542 +0000 UTC m=+370.359312982" lastFinishedPulling="2026-01-25 00:15:33.632978305 +0000 UTC m=+372.865968745" observedRunningTime="2026-01-25 00:15:35.180394965 +0000 UTC m=+374.413385425" watchObservedRunningTime="2026-01-25 00:15:35.189578163 +0000 UTC m=+374.422568603" Jan 25 00:15:36 crc kubenswrapper[4947]: I0125 00:15:36.978182 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:36 crc kubenswrapper[4947]: I0125 00:15:36.978542 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:37 crc kubenswrapper[4947]: I0125 00:15:37.038740 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:37 crc kubenswrapper[4947]: I0125 00:15:37.235307 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:37 crc kubenswrapper[4947]: I0125 00:15:37.955213 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:37 crc kubenswrapper[4947]: I0125 00:15:37.955302 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:38 crc kubenswrapper[4947]: I0125 00:15:38.008571 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:38 crc kubenswrapper[4947]: I0125 00:15:38.242934 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:39 crc kubenswrapper[4947]: I0125 00:15:39.364267 4947 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:39 crc kubenswrapper[4947]: I0125 00:15:39.364353 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:39 crc kubenswrapper[4947]: I0125 00:15:39.398147 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:40 crc kubenswrapper[4947]: I0125 00:15:40.241906 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:40 crc kubenswrapper[4947]: I0125 00:15:40.354221 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:40 crc kubenswrapper[4947]: I0125 00:15:40.354263 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:40 crc kubenswrapper[4947]: I0125 00:15:40.392034 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:41 crc kubenswrapper[4947]: I0125 00:15:41.245057 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:45 crc kubenswrapper[4947]: I0125 00:15:45.585327 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc"] Jan 25 00:15:45 crc kubenswrapper[4947]: I0125 00:15:45.585904 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" podUID="02b69aab-7cbd-4f58-8756-c1c5b615c33d" containerName="controller-manager" containerID="cri-o://d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea" gracePeriod=30 Jan 25 
00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.072434 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.072827 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.072886 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.073754 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a30b7c64683957a95e052711b4800e7471651a4fd592c025524ce839a16c59c"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.073847 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" containerID="cri-o://6a30b7c64683957a95e052711b4800e7471651a4fd592c025524ce839a16c59c" gracePeriod=600 Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.813849 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.847480 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-z98df"] Jan 25 00:15:47 crc kubenswrapper[4947]: E0125 00:15:47.847701 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b69aab-7cbd-4f58-8756-c1c5b615c33d" containerName="controller-manager" Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.847712 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b69aab-7cbd-4f58-8756-c1c5b615c33d" containerName="controller-manager" Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.847809 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b69aab-7cbd-4f58-8756-c1c5b615c33d" containerName="controller-manager" Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.848176 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.859329 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-z98df"] Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.900662 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-proxy-ca-bundles\") pod \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.900737 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-config\") pod \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.900761 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-client-ca\") pod \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.900811 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02b69aab-7cbd-4f58-8756-c1c5b615c33d-serving-cert\") pod \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.900843 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mscq\" (UniqueName: \"kubernetes.io/projected/02b69aab-7cbd-4f58-8756-c1c5b615c33d-kube-api-access-8mscq\") pod \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\" (UID: 
\"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.901095 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e897be71-7c54-40b1-a607-b102af1b8a61-serving-cert\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.901144 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-client-ca\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.901191 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-config\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.901240 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-proxy-ca-bundles\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.901275 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swdzm\" (UniqueName: 
\"kubernetes.io/projected/e897be71-7c54-40b1-a607-b102af1b8a61-kube-api-access-swdzm\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.902412 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-client-ca" (OuterVolumeSpecName: "client-ca") pod "02b69aab-7cbd-4f58-8756-c1c5b615c33d" (UID: "02b69aab-7cbd-4f58-8756-c1c5b615c33d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.902568 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-config" (OuterVolumeSpecName: "config") pod "02b69aab-7cbd-4f58-8756-c1c5b615c33d" (UID: "02b69aab-7cbd-4f58-8756-c1c5b615c33d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.902621 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "02b69aab-7cbd-4f58-8756-c1c5b615c33d" (UID: "02b69aab-7cbd-4f58-8756-c1c5b615c33d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.907660 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02b69aab-7cbd-4f58-8756-c1c5b615c33d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "02b69aab-7cbd-4f58-8756-c1c5b615c33d" (UID: "02b69aab-7cbd-4f58-8756-c1c5b615c33d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.911081 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b69aab-7cbd-4f58-8756-c1c5b615c33d-kube-api-access-8mscq" (OuterVolumeSpecName: "kube-api-access-8mscq") pod "02b69aab-7cbd-4f58-8756-c1c5b615c33d" (UID: "02b69aab-7cbd-4f58-8756-c1c5b615c33d"). InnerVolumeSpecName "kube-api-access-8mscq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002373 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e897be71-7c54-40b1-a607-b102af1b8a61-serving-cert\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002679 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-client-ca\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002713 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-config\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002749 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-proxy-ca-bundles\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002786 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swdzm\" (UniqueName: \"kubernetes.io/projected/e897be71-7c54-40b1-a607-b102af1b8a61-kube-api-access-swdzm\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002833 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002847 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002858 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02b69aab-7cbd-4f58-8756-c1c5b615c33d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002871 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mscq\" (UniqueName: \"kubernetes.io/projected/02b69aab-7cbd-4f58-8756-c1c5b615c33d-kube-api-access-8mscq\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002884 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.004044 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-proxy-ca-bundles\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.004100 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-client-ca\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.004254 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-config\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.006014 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e897be71-7c54-40b1-a607-b102af1b8a61-serving-cert\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.021184 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swdzm\" (UniqueName: \"kubernetes.io/projected/e897be71-7c54-40b1-a607-b102af1b8a61-kube-api-access-swdzm\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") 
" pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.168269 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.244449 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="6a30b7c64683957a95e052711b4800e7471651a4fd592c025524ce839a16c59c" exitCode=0 Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.244519 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"6a30b7c64683957a95e052711b4800e7471651a4fd592c025524ce839a16c59c"} Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.244608 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"3aad7ba55a48c39182daae557e3aa5f13d9797554883516debfe73624ee84ab3"} Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.244637 4947 scope.go:117] "RemoveContainer" containerID="d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.249366 4947 generic.go:334] "Generic (PLEG): container finished" podID="02b69aab-7cbd-4f58-8756-c1c5b615c33d" containerID="d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea" exitCode=0 Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.249708 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.250177 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" event={"ID":"02b69aab-7cbd-4f58-8756-c1c5b615c33d","Type":"ContainerDied","Data":"d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea"} Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.250205 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" event={"ID":"02b69aab-7cbd-4f58-8756-c1c5b615c33d","Type":"ContainerDied","Data":"38a5c9cb609e714547362a4cc611ea212282d2141fd9223ceadf053b5b3b5a11"} Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.288967 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc"] Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.294771 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc"] Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.585197 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-z98df"] Jan 25 00:15:48 crc kubenswrapper[4947]: W0125 00:15:48.601246 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode897be71_7c54_40b1_a607_b102af1b8a61.slice/crio-912113a1bd16a6e817afb1d4e20222dd42d036dbe9bdba110f58925bbe4f61e7 WatchSource:0}: Error finding container 912113a1bd16a6e817afb1d4e20222dd42d036dbe9bdba110f58925bbe4f61e7: Status 404 returned error can't find the container with id 912113a1bd16a6e817afb1d4e20222dd42d036dbe9bdba110f58925bbe4f61e7 Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.622204 4947 patch_prober.go:28] interesting 
pod/controller-manager-5b9d4449c6-5zzzc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.622401 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" podUID="02b69aab-7cbd-4f58-8756-c1c5b615c33d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.895155 4947 scope.go:117] "RemoveContainer" containerID="d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.912525 4947 scope.go:117] "RemoveContainer" containerID="d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea" Jan 25 00:15:48 crc kubenswrapper[4947]: E0125 00:15:48.913018 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea\": container with ID starting with d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea not found: ID does not exist" containerID="d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea" Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.913058 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea"} err="failed to get container status \"d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea\": rpc error: code = NotFound desc = could not find container 
\"d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea\": container with ID starting with d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea not found: ID does not exist" Jan 25 00:15:49 crc kubenswrapper[4947]: I0125 00:15:49.096959 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b69aab-7cbd-4f58-8756-c1c5b615c33d" path="/var/lib/kubelet/pods/02b69aab-7cbd-4f58-8756-c1c5b615c33d/volumes" Jan 25 00:15:49 crc kubenswrapper[4947]: I0125 00:15:49.256216 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" event={"ID":"e897be71-7c54-40b1-a607-b102af1b8a61","Type":"ContainerStarted","Data":"efb9b9308bc76e855b422f0c911184e13b7958d025c2c710defa2cfdc8709b62"} Jan 25 00:15:49 crc kubenswrapper[4947]: I0125 00:15:49.256253 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" event={"ID":"e897be71-7c54-40b1-a607-b102af1b8a61","Type":"ContainerStarted","Data":"912113a1bd16a6e817afb1d4e20222dd42d036dbe9bdba110f58925bbe4f61e7"} Jan 25 00:15:49 crc kubenswrapper[4947]: I0125 00:15:49.256381 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:49 crc kubenswrapper[4947]: I0125 00:15:49.261030 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" Jan 25 00:15:49 crc kubenswrapper[4947]: I0125 00:15:49.279188 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" podStartSLOduration=4.279172229 podStartE2EDuration="4.279172229s" podCreationTimestamp="2026-01-25 00:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-25 00:15:49.276062039 +0000 UTC m=+388.509052479" watchObservedRunningTime="2026-01-25 00:15:49.279172229 +0000 UTC m=+388.512162669" Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.078651 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" podUID="ce1b6238-9a41-4472-accc-e4d7d6371357" containerName="registry" containerID="cri-o://51f2c364bfae060665da042ae7dc21f336f532bdfa00072378e4e19dbb303585" gracePeriod=30 Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.323243 4947 generic.go:334] "Generic (PLEG): container finished" podID="ce1b6238-9a41-4472-accc-e4d7d6371357" containerID="51f2c364bfae060665da042ae7dc21f336f532bdfa00072378e4e19dbb303585" exitCode=0 Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.323298 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" event={"ID":"ce1b6238-9a41-4472-accc-e4d7d6371357","Type":"ContainerDied","Data":"51f2c364bfae060665da042ae7dc21f336f532bdfa00072378e4e19dbb303585"} Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.633023 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.723375 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-tls\") pod \"ce1b6238-9a41-4472-accc-e4d7d6371357\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.724023 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ce1b6238-9a41-4472-accc-e4d7d6371357-installation-pull-secrets\") pod \"ce1b6238-9a41-4472-accc-e4d7d6371357\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.724271 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww4b6\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-kube-api-access-ww4b6\") pod \"ce1b6238-9a41-4472-accc-e4d7d6371357\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.724541 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ce1b6238-9a41-4472-accc-e4d7d6371357-ca-trust-extracted\") pod \"ce1b6238-9a41-4472-accc-e4d7d6371357\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.724788 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-certificates\") pod \"ce1b6238-9a41-4472-accc-e4d7d6371357\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.725356 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ce1b6238-9a41-4472-accc-e4d7d6371357\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.725765 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-trusted-ca\") pod \"ce1b6238-9a41-4472-accc-e4d7d6371357\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.726020 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-bound-sa-token\") pod \"ce1b6238-9a41-4472-accc-e4d7d6371357\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.725822 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ce1b6238-9a41-4472-accc-e4d7d6371357" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.726665 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ce1b6238-9a41-4472-accc-e4d7d6371357" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.727044 4947 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.729275 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.729904 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ce1b6238-9a41-4472-accc-e4d7d6371357" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.730178 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce1b6238-9a41-4472-accc-e4d7d6371357-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ce1b6238-9a41-4472-accc-e4d7d6371357" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.730322 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-kube-api-access-ww4b6" (OuterVolumeSpecName: "kube-api-access-ww4b6") pod "ce1b6238-9a41-4472-accc-e4d7d6371357" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357"). InnerVolumeSpecName "kube-api-access-ww4b6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.732410 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ce1b6238-9a41-4472-accc-e4d7d6371357" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.740272 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ce1b6238-9a41-4472-accc-e4d7d6371357" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.746246 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce1b6238-9a41-4472-accc-e4d7d6371357-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ce1b6238-9a41-4472-accc-e4d7d6371357" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.831189 4947 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.831231 4947 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.831246 4947 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ce1b6238-9a41-4472-accc-e4d7d6371357-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.831261 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww4b6\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-kube-api-access-ww4b6\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.831273 4947 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ce1b6238-9a41-4472-accc-e4d7d6371357-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:57 crc kubenswrapper[4947]: I0125 00:15:57.334549 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" event={"ID":"ce1b6238-9a41-4472-accc-e4d7d6371357","Type":"ContainerDied","Data":"8aa2ec1702299cb0f2f7ebe9da84ffc79ac7ec1919bcb49ddb3c081345236f17"} Jan 25 00:15:57 crc kubenswrapper[4947]: I0125 00:15:57.334631 4947 scope.go:117] "RemoveContainer" containerID="51f2c364bfae060665da042ae7dc21f336f532bdfa00072378e4e19dbb303585" Jan 25 00:15:57 crc kubenswrapper[4947]: I0125 
00:15:57.334642 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:15:57 crc kubenswrapper[4947]: I0125 00:15:57.365810 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mprs4"] Jan 25 00:15:57 crc kubenswrapper[4947]: I0125 00:15:57.373997 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mprs4"] Jan 25 00:15:59 crc kubenswrapper[4947]: I0125 00:15:59.101505 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce1b6238-9a41-4472-accc-e4d7d6371357" path="/var/lib/kubelet/pods/ce1b6238-9a41-4472-accc-e4d7d6371357/volumes" Jan 25 00:17:47 crc kubenswrapper[4947]: I0125 00:17:47.072771 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:17:47 crc kubenswrapper[4947]: I0125 00:17:47.073508 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:18:17 crc kubenswrapper[4947]: I0125 00:18:17.073604 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:18:17 crc kubenswrapper[4947]: I0125 00:18:17.074285 4947 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.072624 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.073719 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.073792 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.074873 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3aad7ba55a48c39182daae557e3aa5f13d9797554883516debfe73624ee84ab3"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.075004 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" 
containerID="cri-o://3aad7ba55a48c39182daae557e3aa5f13d9797554883516debfe73624ee84ab3" gracePeriod=600 Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.454242 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="3aad7ba55a48c39182daae557e3aa5f13d9797554883516debfe73624ee84ab3" exitCode=0 Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.454336 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"3aad7ba55a48c39182daae557e3aa5f13d9797554883516debfe73624ee84ab3"} Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.454721 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"f64176709d5607e82072c383022bad7109ba8a5dcfa7f1ccb95f13edf0e8c935"} Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.454816 4947 scope.go:117] "RemoveContainer" containerID="6a30b7c64683957a95e052711b4800e7471651a4fd592c025524ce839a16c59c" Jan 25 00:19:21 crc kubenswrapper[4947]: I0125 00:19:21.472370 4947 scope.go:117] "RemoveContainer" containerID="87af76d9cedf9765995fdba192251417d6cb96cc8dbaac5f8d89ebd77523cb24" Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.617576 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fvfwz"] Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.618935 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovn-controller" containerID="cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a" gracePeriod=30 Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.619174 4947 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="sbdb" containerID="cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3" gracePeriod=30 Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.619216 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="nbdb" containerID="cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a" gracePeriod=30 Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.619251 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="northd" containerID="cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f" gracePeriod=30 Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.619285 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d" gracePeriod=30 Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.619316 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kube-rbac-proxy-node" containerID="cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53" gracePeriod=30 Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.619358 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" 
containerName="ovn-acl-logging" containerID="cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7" gracePeriod=30 Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.659902 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" containerID="cri-o://c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c" gracePeriod=30 Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.960209 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/3.log" Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.962954 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovn-acl-logging/0.log" Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.963855 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovn-controller/0.log" Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.964380 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008157 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-log-socket\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008209 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-bin\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008238 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-openvswitch\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008267 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-script-lib\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008287 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-slash\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008307 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-netns\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008328 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-netd\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008348 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-ovn-kubernetes\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008367 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-ovn\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008388 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-env-overrides\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008465 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-kubelet\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: 
I0125 00:20:19.008498 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-config\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008526 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-systemd\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008544 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-var-lib-openvswitch\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008570 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008568 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-slash" (OuterVolumeSpecName: "host-slash") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008596 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bf5f940-5287-40f1-b208-535cdfcb0054-ovn-node-metrics-cert\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008643 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008720 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-etc-openvswitch\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008686 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-log-socket" (OuterVolumeSpecName: "log-socket") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008754 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). 
InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008811 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-node-log" (OuterVolumeSpecName: "node-log") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008832 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008787 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-node-log\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008866 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008895 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008935 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-systemd-units\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009040 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh6bp\" (UniqueName: \"kubernetes.io/projected/8bf5f940-5287-40f1-b208-535cdfcb0054-kube-api-access-xh6bp\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009059 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009185 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009387 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009537 4947 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009574 4947 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-node-log\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009601 4947 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-log-socket\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009627 4947 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009674 4947 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009700 4947 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-slash\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009726 4947 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009501 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009587 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009636 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009671 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009823 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.010609 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.015648 4947 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.015726 4947 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.020295 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bf5f940-5287-40f1-b208-535cdfcb0054-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.025471 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bf5f940-5287-40f1-b208-535cdfcb0054-kube-api-access-xh6bp" (OuterVolumeSpecName: "kube-api-access-xh6bp") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "kube-api-access-xh6bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.031273 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.056657 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rqxpr"] Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057072 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovn-acl-logging" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057106 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovn-acl-logging" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057157 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="sbdb" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057172 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="sbdb" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057205 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057221 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057243 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1b6238-9a41-4472-accc-e4d7d6371357" containerName="registry" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057256 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1b6238-9a41-4472-accc-e4d7d6371357" containerName="registry" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057273 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 
00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057286 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057307 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kube-rbac-proxy-node" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057320 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kube-rbac-proxy-node" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057336 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057351 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057368 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovn-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057381 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovn-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057400 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="northd" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057413 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="northd" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057428 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="nbdb" Jan 25 00:20:19 crc 
kubenswrapper[4947]: I0125 00:20:19.057444 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="nbdb" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057462 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057475 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057492 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kubecfg-setup" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057505 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kubecfg-setup" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057522 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kube-rbac-proxy-ovn-metrics" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057538 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kube-rbac-proxy-ovn-metrics" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057558 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057572 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057770 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kube-rbac-proxy-ovn-metrics" Jan 
25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057798 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057822 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057836 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovn-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057854 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057870 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovn-acl-logging" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057885 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="northd" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057903 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1b6238-9a41-4472-accc-e4d7d6371357" containerName="registry" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057922 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kube-rbac-proxy-node" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057940 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="sbdb" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057958 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" 
containerName="nbdb" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.058398 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.058423 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.065652 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.083330 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/3.log" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.086406 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovn-acl-logging/0.log" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087093 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovn-controller/0.log" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087702 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c" exitCode=0 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087755 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3" exitCode=0 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087771 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" 
containerID="f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a" exitCode=0 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087787 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f" exitCode=0 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087801 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d" exitCode=0 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087795 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087874 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087890 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087914 4947 scope.go:117] "RemoveContainer" containerID="c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087817 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53" exitCode=0 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087965 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7" exitCode=143 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087981 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a" exitCode=143 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087894 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088180 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088209 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} Jan 25 00:20:19 crc 
kubenswrapper[4947]: I0125 00:20:19.088230 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088252 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088276 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088289 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088301 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088313 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088324 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088334 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088345 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088356 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088370 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088390 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088403 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088414 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088427 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088438 4947 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088449 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088462 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088472 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088483 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088494 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088510 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088526 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} Jan 25 
00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088539 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088552 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088564 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088610 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088621 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088631 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088642 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.089734 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} Jan 25 
00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.089746 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.091772 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/2.log" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.101497 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/1.log" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.101571 4947 generic.go:334] "Generic (PLEG): container finished" podID="2d914454-2c17-47f2-aa53-aba3bfaad296" containerID="c7e4bdd1edd4881050f59041c1d6812e03c6fe683d1104b228fa6e3da5f11032" exitCode=2 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115663 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"ab91914d18e527be722f5e70489e90096dc0e627d44b69e63be506f96778e303"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115727 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115753 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115763 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} Jan 25 
00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115775 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115784 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115792 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115800 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115812 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115821 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115829 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115853 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9fspn" 
event={"ID":"2d914454-2c17-47f2-aa53-aba3bfaad296","Type":"ContainerDied","Data":"c7e4bdd1edd4881050f59041c1d6812e03c6fe683d1104b228fa6e3da5f11032"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115873 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.116408 4947 scope.go:117] "RemoveContainer" containerID="c7e4bdd1edd4881050f59041c1d6812e03c6fe683d1104b228fa6e3da5f11032" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.116690 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9fspn_openshift-multus(2d914454-2c17-47f2-aa53-aba3bfaad296)\"" pod="openshift-multus/multus-9fspn" podUID="2d914454-2c17-47f2-aa53-aba3bfaad296" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.118454 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-kubelet\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.118525 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-cni-netd\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.118841 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-var-lib-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.119003 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-etc-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.119028 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-env-overrides\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.120394 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-systemd-units\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.120629 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-ovn\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.120758 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-node-log\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.120808 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovn-node-metrics-cert\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.120929 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovnkube-config\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121003 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmcj5\" (UniqueName: \"kubernetes.io/projected/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-kube-api-access-bmcj5\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121030 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-systemd\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121097 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121156 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121192 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-run-netns\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121243 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121277 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-slash\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121337 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-log-socket\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121363 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-cni-bin\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122354 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovnkube-script-lib\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122525 4947 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122542 4947 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122555 4947 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 
00:20:19.122568 4947 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122579 4947 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122588 4947 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122598 4947 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122609 4947 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122622 4947 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bf5f940-5287-40f1-b208-535cdfcb0054-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122638 4947 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122652 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh6bp\" (UniqueName: 
\"kubernetes.io/projected/8bf5f940-5287-40f1-b208-535cdfcb0054-kube-api-access-xh6bp\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.136616 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fvfwz"] Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.138698 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.144089 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fvfwz"] Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.180972 4947 scope.go:117] "RemoveContainer" containerID="f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.199043 4947 scope.go:117] "RemoveContainer" containerID="f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.218616 4947 scope.go:117] "RemoveContainer" containerID="ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224094 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224165 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224214 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-run-netns\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224236 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224236 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224276 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-slash\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224309 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-log-socket\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc 
kubenswrapper[4947]: I0125 00:20:19.224308 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-run-netns\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224351 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224355 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-cni-bin\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224405 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-cni-bin\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224413 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovnkube-script-lib\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224481 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-kubelet\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224526 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-cni-netd\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224559 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-var-lib-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224600 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-etc-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224689 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224739 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-var-lib-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224750 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-kubelet\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224767 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-cni-netd\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224853 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-log-socket\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224896 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-etc-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224963 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-slash\") pod 
\"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225085 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-env-overrides\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225116 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-systemd-units\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225170 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-ovn\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225228 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-systemd-units\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225277 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovn-node-metrics-cert\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225326 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-ovn\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225339 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovnkube-script-lib\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225369 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-node-log\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225400 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-node-log\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225435 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovnkube-config\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.226204 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovnkube-config\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.226204 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmcj5\" (UniqueName: \"kubernetes.io/projected/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-kube-api-access-bmcj5\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.226240 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-env-overrides\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.226283 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-systemd\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.226262 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-systemd\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.230434 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovn-node-metrics-cert\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.238110 4947 scope.go:117] "RemoveContainer" containerID="dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.247369 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmcj5\" (UniqueName: \"kubernetes.io/projected/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-kube-api-access-bmcj5\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.258978 4947 scope.go:117] "RemoveContainer" containerID="8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.280521 4947 scope.go:117] "RemoveContainer" containerID="dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.302479 4947 scope.go:117] "RemoveContainer" containerID="d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.322831 4947 scope.go:117] "RemoveContainer" containerID="dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.360530 4947 scope.go:117] "RemoveContainer" containerID="c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.361401 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": container with ID starting with 
c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c not found: ID does not exist" containerID="c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.361500 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} err="failed to get container status \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": rpc error: code = NotFound desc = could not find container \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": container with ID starting with c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.361550 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.362281 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": container with ID starting with a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e not found: ID does not exist" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.362377 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} err="failed to get container status \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": rpc error: code = NotFound desc = could not find container \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": container with ID starting with a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e not found: ID does not 
exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.362453 4947 scope.go:117] "RemoveContainer" containerID="f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.363021 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": container with ID starting with f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3 not found: ID does not exist" containerID="f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.363110 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} err="failed to get container status \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": rpc error: code = NotFound desc = could not find container \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": container with ID starting with f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.363188 4947 scope.go:117] "RemoveContainer" containerID="f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.363748 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": container with ID starting with f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a not found: ID does not exist" containerID="f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.363841 4947 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} err="failed to get container status \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": rpc error: code = NotFound desc = could not find container \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": container with ID starting with f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.363913 4947 scope.go:117] "RemoveContainer" containerID="ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.364793 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": container with ID starting with ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f not found: ID does not exist" containerID="ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.364868 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} err="failed to get container status \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": rpc error: code = NotFound desc = could not find container \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": container with ID starting with ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.364918 4947 scope.go:117] "RemoveContainer" containerID="dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.365647 4947 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": container with ID starting with dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d not found: ID does not exist" containerID="dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.365702 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} err="failed to get container status \"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": rpc error: code = NotFound desc = could not find container \"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": container with ID starting with dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.365730 4947 scope.go:117] "RemoveContainer" containerID="8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.366436 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": container with ID starting with 8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53 not found: ID does not exist" containerID="8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.366659 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} err="failed to get container status \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": rpc error: code = NotFound desc = could 
not find container \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": container with ID starting with 8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.366779 4947 scope.go:117] "RemoveContainer" containerID="dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.367525 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": container with ID starting with dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7 not found: ID does not exist" containerID="dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.367565 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} err="failed to get container status \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": rpc error: code = NotFound desc = could not find container \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": container with ID starting with dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.367588 4947 scope.go:117] "RemoveContainer" containerID="d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.368103 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": container with ID starting with d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a not found: 
ID does not exist" containerID="d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.368216 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} err="failed to get container status \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": rpc error: code = NotFound desc = could not find container \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": container with ID starting with d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.368289 4947 scope.go:117] "RemoveContainer" containerID="dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.368988 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": container with ID starting with dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1 not found: ID does not exist" containerID="dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.369055 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} err="failed to get container status \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": rpc error: code = NotFound desc = could not find container \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": container with ID starting with dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.369104 4947 
scope.go:117] "RemoveContainer" containerID="c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.370286 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} err="failed to get container status \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": rpc error: code = NotFound desc = could not find container \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": container with ID starting with c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.370378 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.371081 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} err="failed to get container status \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": rpc error: code = NotFound desc = could not find container \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": container with ID starting with a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.371177 4947 scope.go:117] "RemoveContainer" containerID="f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.371692 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} err="failed to get container status \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": rpc 
error: code = NotFound desc = could not find container \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": container with ID starting with f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.371794 4947 scope.go:117] "RemoveContainer" containerID="f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.372296 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} err="failed to get container status \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": rpc error: code = NotFound desc = could not find container \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": container with ID starting with f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.372352 4947 scope.go:117] "RemoveContainer" containerID="ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.372705 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} err="failed to get container status \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": rpc error: code = NotFound desc = could not find container \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": container with ID starting with ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.372755 4947 scope.go:117] "RemoveContainer" containerID="dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d" Jan 25 00:20:19 crc 
kubenswrapper[4947]: I0125 00:20:19.373324 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} err="failed to get container status \"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": rpc error: code = NotFound desc = could not find container \"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": container with ID starting with dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.373384 4947 scope.go:117] "RemoveContainer" containerID="8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.373933 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} err="failed to get container status \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": rpc error: code = NotFound desc = could not find container \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": container with ID starting with 8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.374065 4947 scope.go:117] "RemoveContainer" containerID="dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.374962 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} err="failed to get container status \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": rpc error: code = NotFound desc = could not find container \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": container 
with ID starting with dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.374998 4947 scope.go:117] "RemoveContainer" containerID="d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.375657 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} err="failed to get container status \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": rpc error: code = NotFound desc = could not find container \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": container with ID starting with d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.375742 4947 scope.go:117] "RemoveContainer" containerID="dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.376316 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} err="failed to get container status \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": rpc error: code = NotFound desc = could not find container \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": container with ID starting with dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.376355 4947 scope.go:117] "RemoveContainer" containerID="c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.377185 4947 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} err="failed to get container status \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": rpc error: code = NotFound desc = could not find container \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": container with ID starting with c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.377290 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.377795 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} err="failed to get container status \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": rpc error: code = NotFound desc = could not find container \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": container with ID starting with a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.377831 4947 scope.go:117] "RemoveContainer" containerID="f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.378455 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} err="failed to get container status \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": rpc error: code = NotFound desc = could not find container \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": container with ID starting with f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3 not found: ID does not 
exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.378558 4947 scope.go:117] "RemoveContainer" containerID="f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.379198 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} err="failed to get container status \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": rpc error: code = NotFound desc = could not find container \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": container with ID starting with f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.379233 4947 scope.go:117] "RemoveContainer" containerID="ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.379985 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} err="failed to get container status \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": rpc error: code = NotFound desc = could not find container \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": container with ID starting with ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.380045 4947 scope.go:117] "RemoveContainer" containerID="dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.380559 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} err="failed to get container status 
\"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": rpc error: code = NotFound desc = could not find container \"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": container with ID starting with dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.380593 4947 scope.go:117] "RemoveContainer" containerID="8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.388685 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} err="failed to get container status \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": rpc error: code = NotFound desc = could not find container \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": container with ID starting with 8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.388787 4947 scope.go:117] "RemoveContainer" containerID="dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.388876 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.390265 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} err="failed to get container status \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": rpc error: code = NotFound desc = could not find container \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": container with ID starting with dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.390326 4947 scope.go:117] "RemoveContainer" containerID="d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.391175 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} err="failed to get container status \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": rpc error: code = NotFound desc = could not find container \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": container with ID starting with d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.391260 4947 scope.go:117] "RemoveContainer" containerID="dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.391833 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} err="failed to get container status \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": rpc error: code = NotFound desc = could not 
find container \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": container with ID starting with dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.391885 4947 scope.go:117] "RemoveContainer" containerID="c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.392740 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} err="failed to get container status \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": rpc error: code = NotFound desc = could not find container \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": container with ID starting with c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.392775 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.393892 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} err="failed to get container status \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": rpc error: code = NotFound desc = could not find container \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": container with ID starting with a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.393982 4947 scope.go:117] "RemoveContainer" containerID="f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.394929 
4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} err="failed to get container status \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": rpc error: code = NotFound desc = could not find container \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": container with ID starting with f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.395031 4947 scope.go:117] "RemoveContainer" containerID="f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.395621 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} err="failed to get container status \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": rpc error: code = NotFound desc = could not find container \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": container with ID starting with f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.395658 4947 scope.go:117] "RemoveContainer" containerID="ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.396471 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} err="failed to get container status \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": rpc error: code = NotFound desc = could not find container \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": container with ID starting with 
ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.396517 4947 scope.go:117] "RemoveContainer" containerID="dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.397275 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} err="failed to get container status \"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": rpc error: code = NotFound desc = could not find container \"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": container with ID starting with dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.397314 4947 scope.go:117] "RemoveContainer" containerID="8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.397919 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} err="failed to get container status \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": rpc error: code = NotFound desc = could not find container \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": container with ID starting with 8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.397969 4947 scope.go:117] "RemoveContainer" containerID="dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.398528 4947 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} err="failed to get container status \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": rpc error: code = NotFound desc = could not find container \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": container with ID starting with dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.398565 4947 scope.go:117] "RemoveContainer" containerID="d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.399191 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} err="failed to get container status \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": rpc error: code = NotFound desc = could not find container \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": container with ID starting with d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.399233 4947 scope.go:117] "RemoveContainer" containerID="dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.399663 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} err="failed to get container status \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": rpc error: code = NotFound desc = could not find container \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": container with ID starting with dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1 not found: ID does not 
exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.399705 4947 scope.go:117] "RemoveContainer" containerID="c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.400159 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} err="failed to get container status \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": rpc error: code = NotFound desc = could not find container \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": container with ID starting with c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: W0125 00:20:19.425024 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26019576_c357_40d3_a0aa_8dcc2ad9d1fa.slice/crio-7add459e945719b87e9d9432e381e9c983c1be15297d09f282af04213e0a5f77 WatchSource:0}: Error finding container 7add459e945719b87e9d9432e381e9c983c1be15297d09f282af04213e0a5f77: Status 404 returned error can't find the container with id 7add459e945719b87e9d9432e381e9c983c1be15297d09f282af04213e0a5f77 Jan 25 00:20:20 crc kubenswrapper[4947]: I0125 00:20:20.117052 4947 generic.go:334] "Generic (PLEG): container finished" podID="26019576-c357-40d3-a0aa-8dcc2ad9d1fa" containerID="848d58f9bb4672b4ee2b5cc6cfc9cc94bf7b1a5819ab33ba7dbbe132658a0fbf" exitCode=0 Jan 25 00:20:20 crc kubenswrapper[4947]: I0125 00:20:20.117114 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerDied","Data":"848d58f9bb4672b4ee2b5cc6cfc9cc94bf7b1a5819ab33ba7dbbe132658a0fbf"} Jan 25 00:20:20 crc kubenswrapper[4947]: I0125 00:20:20.117213 4947 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"7add459e945719b87e9d9432e381e9c983c1be15297d09f282af04213e0a5f77"} Jan 25 00:20:21 crc kubenswrapper[4947]: I0125 00:20:21.103842 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" path="/var/lib/kubelet/pods/8bf5f940-5287-40f1-b208-535cdfcb0054/volumes" Jan 25 00:20:21 crc kubenswrapper[4947]: I0125 00:20:21.140194 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"6c40e7aa0f5c145e7e67d7194919c662fb4abe38b3ca460972b8717bd1a79339"} Jan 25 00:20:21 crc kubenswrapper[4947]: I0125 00:20:21.140270 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"c22f3919103aef95055fd843f7ceaf97753d1b9c8c1a736f00131c3496454dbf"} Jan 25 00:20:21 crc kubenswrapper[4947]: I0125 00:20:21.140284 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"008717caf9281dc7b2d7b230e0e01d18dbb3014f90629d681345ede2636282ce"} Jan 25 00:20:21 crc kubenswrapper[4947]: I0125 00:20:21.140294 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"0019d2e51271ed6a4d947dcdb47b1fbce4e27d1c84e35bcd08294986f721ae9b"} Jan 25 00:20:21 crc kubenswrapper[4947]: I0125 00:20:21.140305 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" 
event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"06d6f997cea4cb672b8347913be13e77780b233c528828b0a18a29a7153fc3e8"} Jan 25 00:20:21 crc kubenswrapper[4947]: I0125 00:20:21.140315 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"88bd81431345d87b3a9aa1e630bc8abfc3a6b6277d7b01990e79770a67584d34"} Jan 25 00:20:21 crc kubenswrapper[4947]: I0125 00:20:21.537069 4947 scope.go:117] "RemoveContainer" containerID="6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470" Jan 25 00:20:22 crc kubenswrapper[4947]: I0125 00:20:22.149200 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/2.log" Jan 25 00:20:24 crc kubenswrapper[4947]: I0125 00:20:24.168499 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"12471d0a883816fa7afeaf2a9acf8e7d331b957de65795c4b508903d395b655c"} Jan 25 00:20:26 crc kubenswrapper[4947]: I0125 00:20:26.212882 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"f5b59d180938d74af0d203991b99c2039627967b3e3ddb702b8b4cc8b954b401"} Jan 25 00:20:26 crc kubenswrapper[4947]: I0125 00:20:26.213317 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:26 crc kubenswrapper[4947]: I0125 00:20:26.213381 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:26 crc kubenswrapper[4947]: I0125 00:20:26.213393 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:26 crc kubenswrapper[4947]: I0125 00:20:26.256836 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:26 crc kubenswrapper[4947]: I0125 00:20:26.269051 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:26 crc kubenswrapper[4947]: I0125 00:20:26.281281 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" podStartSLOduration=7.281262903 podStartE2EDuration="7.281262903s" podCreationTimestamp="2026-01-25 00:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:20:26.276892567 +0000 UTC m=+665.509883027" watchObservedRunningTime="2026-01-25 00:20:26.281262903 +0000 UTC m=+665.514253343" Jan 25 00:20:30 crc kubenswrapper[4947]: I0125 00:20:30.090167 4947 scope.go:117] "RemoveContainer" containerID="c7e4bdd1edd4881050f59041c1d6812e03c6fe683d1104b228fa6e3da5f11032" Jan 25 00:20:30 crc kubenswrapper[4947]: E0125 00:20:30.091421 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9fspn_openshift-multus(2d914454-2c17-47f2-aa53-aba3bfaad296)\"" pod="openshift-multus/multus-9fspn" podUID="2d914454-2c17-47f2-aa53-aba3bfaad296" Jan 25 00:20:44 crc kubenswrapper[4947]: I0125 00:20:44.090297 4947 scope.go:117] "RemoveContainer" containerID="c7e4bdd1edd4881050f59041c1d6812e03c6fe683d1104b228fa6e3da5f11032" Jan 25 00:20:45 crc kubenswrapper[4947]: I0125 00:20:45.347865 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/2.log" Jan 25 00:20:45 crc 
kubenswrapper[4947]: I0125 00:20:45.348179 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9fspn" event={"ID":"2d914454-2c17-47f2-aa53-aba3bfaad296","Type":"ContainerStarted","Data":"85d7a3877eb8acd8e754ee1a34752e543f29301ad8c351bf0c5981d15ee40ac6"} Jan 25 00:20:47 crc kubenswrapper[4947]: I0125 00:20:47.072816 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:20:47 crc kubenswrapper[4947]: I0125 00:20:47.073274 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:20:49 crc kubenswrapper[4947]: I0125 00:20:49.428503 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:21:17 crc kubenswrapper[4947]: I0125 00:21:17.072975 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:21:17 crc kubenswrapper[4947]: I0125 00:21:17.073596 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:21:29 crc 
kubenswrapper[4947]: I0125 00:21:29.487360 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwxx4"] Jan 25 00:21:29 crc kubenswrapper[4947]: I0125 00:21:29.488167 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hwxx4" podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerName="registry-server" containerID="cri-o://e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e" gracePeriod=30 Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.414266 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.448745 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-utilities\") pod \"24ae0891-d29f-45fe-be30-a46f76a39dda\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") " Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.448796 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-catalog-content\") pod \"24ae0891-d29f-45fe-be30-a46f76a39dda\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") " Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.448852 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdqff\" (UniqueName: \"kubernetes.io/projected/24ae0891-d29f-45fe-be30-a46f76a39dda-kube-api-access-hdqff\") pod \"24ae0891-d29f-45fe-be30-a46f76a39dda\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") " Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.450692 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-utilities" (OuterVolumeSpecName: "utilities") pod "24ae0891-d29f-45fe-be30-a46f76a39dda" (UID: "24ae0891-d29f-45fe-be30-a46f76a39dda"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.458335 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ae0891-d29f-45fe-be30-a46f76a39dda-kube-api-access-hdqff" (OuterVolumeSpecName: "kube-api-access-hdqff") pod "24ae0891-d29f-45fe-be30-a46f76a39dda" (UID: "24ae0891-d29f-45fe-be30-a46f76a39dda"). InnerVolumeSpecName "kube-api-access-hdqff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.471092 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24ae0891-d29f-45fe-be30-a46f76a39dda" (UID: "24ae0891-d29f-45fe-be30-a46f76a39dda"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.549972 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.550009 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.550024 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdqff\" (UniqueName: \"kubernetes.io/projected/24ae0891-d29f-45fe-be30-a46f76a39dda-kube-api-access-hdqff\") on node \"crc\" DevicePath \"\"" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.667529 4947 generic.go:334] "Generic (PLEG): container finished" podID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerID="e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e" exitCode=0 Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.667583 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwxx4" event={"ID":"24ae0891-d29f-45fe-be30-a46f76a39dda","Type":"ContainerDied","Data":"e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e"} Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.667614 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwxx4" event={"ID":"24ae0891-d29f-45fe-be30-a46f76a39dda","Type":"ContainerDied","Data":"aea55c0ecb2555c344ea84eb69f61a3ca31d0786e037f7a2f29477da1e97cbae"} Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.667640 4947 scope.go:117] "RemoveContainer" containerID="e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 
00:21:30.667784 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.685204 4947 scope.go:117] "RemoveContainer" containerID="24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.699947 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwxx4"] Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.701642 4947 scope.go:117] "RemoveContainer" containerID="3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.706490 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwxx4"] Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.722809 4947 scope.go:117] "RemoveContainer" containerID="e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e" Jan 25 00:21:30 crc kubenswrapper[4947]: E0125 00:21:30.723169 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e\": container with ID starting with e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e not found: ID does not exist" containerID="e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.723204 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e"} err="failed to get container status \"e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e\": rpc error: code = NotFound desc = could not find container \"e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e\": container with ID starting with 
e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e not found: ID does not exist" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.723229 4947 scope.go:117] "RemoveContainer" containerID="24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870" Jan 25 00:21:30 crc kubenswrapper[4947]: E0125 00:21:30.723444 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870\": container with ID starting with 24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870 not found: ID does not exist" containerID="24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.723473 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870"} err="failed to get container status \"24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870\": rpc error: code = NotFound desc = could not find container \"24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870\": container with ID starting with 24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870 not found: ID does not exist" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.723490 4947 scope.go:117] "RemoveContainer" containerID="3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97" Jan 25 00:21:30 crc kubenswrapper[4947]: E0125 00:21:30.723736 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97\": container with ID starting with 3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97 not found: ID does not exist" containerID="3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97" Jan 25 00:21:30 crc 
kubenswrapper[4947]: I0125 00:21:30.723761 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97"} err="failed to get container status \"3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97\": rpc error: code = NotFound desc = could not find container \"3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97\": container with ID starting with 3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97 not found: ID does not exist" Jan 25 00:21:31 crc kubenswrapper[4947]: I0125 00:21:31.096159 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" path="/var/lib/kubelet/pods/24ae0891-d29f-45fe-be30-a46f76a39dda/volumes" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.663052 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw"] Jan 25 00:21:33 crc kubenswrapper[4947]: E0125 00:21:33.663655 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerName="extract-utilities" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.663675 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerName="extract-utilities" Jan 25 00:21:33 crc kubenswrapper[4947]: E0125 00:21:33.663698 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerName="registry-server" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.663743 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerName="registry-server" Jan 25 00:21:33 crc kubenswrapper[4947]: E0125 00:21:33.663770 4947 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerName="extract-content" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.663785 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerName="extract-content" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.663957 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerName="registry-server" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.665120 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.668322 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.682608 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw"] Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.805725 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnm6g\" (UniqueName: \"kubernetes.io/projected/e1924f8a-318d-4d3b-ada5-703cf399beed-kube-api-access-jnm6g\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.805799 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.805844 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.906446 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.906733 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.906858 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnm6g\" (UniqueName: \"kubernetes.io/projected/e1924f8a-318d-4d3b-ada5-703cf399beed-kube-api-access-jnm6g\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: 
I0125 00:21:33.907730 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.907863 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.944885 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnm6g\" (UniqueName: \"kubernetes.io/projected/e1924f8a-318d-4d3b-ada5-703cf399beed-kube-api-access-jnm6g\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.986267 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:34 crc kubenswrapper[4947]: I0125 00:21:34.143285 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw"] Jan 25 00:21:34 crc kubenswrapper[4947]: I0125 00:21:34.712724 4947 generic.go:334] "Generic (PLEG): container finished" podID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerID="c344d9d9f22b51f2f10814803290d8766c28975c8ea5704fd4717019915221a6" exitCode=0 Jan 25 00:21:34 crc kubenswrapper[4947]: I0125 00:21:34.712867 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" event={"ID":"e1924f8a-318d-4d3b-ada5-703cf399beed","Type":"ContainerDied","Data":"c344d9d9f22b51f2f10814803290d8766c28975c8ea5704fd4717019915221a6"} Jan 25 00:21:34 crc kubenswrapper[4947]: I0125 00:21:34.713018 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" event={"ID":"e1924f8a-318d-4d3b-ada5-703cf399beed","Type":"ContainerStarted","Data":"7525f14ae3f0ec04e7b4ff6ddaa5361d71583b1bb8588c944a3b51f09c233b98"} Jan 25 00:21:34 crc kubenswrapper[4947]: I0125 00:21:34.718505 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 00:21:36 crc kubenswrapper[4947]: I0125 00:21:36.724965 4947 generic.go:334] "Generic (PLEG): container finished" podID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerID="5d066cdf169265ce54530ce4564da83bbc9d5bccd1ab4f3c3243fde7719295b2" exitCode=0 Jan 25 00:21:36 crc kubenswrapper[4947]: I0125 00:21:36.725183 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" 
event={"ID":"e1924f8a-318d-4d3b-ada5-703cf399beed","Type":"ContainerDied","Data":"5d066cdf169265ce54530ce4564da83bbc9d5bccd1ab4f3c3243fde7719295b2"} Jan 25 00:21:37 crc kubenswrapper[4947]: I0125 00:21:37.735273 4947 generic.go:334] "Generic (PLEG): container finished" podID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerID="f64ba124f04191e2608b533488cc3957072c8532f826bf85a3496fa6b92d6e34" exitCode=0 Jan 25 00:21:37 crc kubenswrapper[4947]: I0125 00:21:37.735323 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" event={"ID":"e1924f8a-318d-4d3b-ada5-703cf399beed","Type":"ContainerDied","Data":"f64ba124f04191e2608b533488cc3957072c8532f826bf85a3496fa6b92d6e34"} Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.036842 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.175752 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-bundle\") pod \"e1924f8a-318d-4d3b-ada5-703cf399beed\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.175870 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-util\") pod \"e1924f8a-318d-4d3b-ada5-703cf399beed\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.176031 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnm6g\" (UniqueName: \"kubernetes.io/projected/e1924f8a-318d-4d3b-ada5-703cf399beed-kube-api-access-jnm6g\") pod \"e1924f8a-318d-4d3b-ada5-703cf399beed\" (UID: 
\"e1924f8a-318d-4d3b-ada5-703cf399beed\") " Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.181082 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-bundle" (OuterVolumeSpecName: "bundle") pod "e1924f8a-318d-4d3b-ada5-703cf399beed" (UID: "e1924f8a-318d-4d3b-ada5-703cf399beed"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.182172 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1924f8a-318d-4d3b-ada5-703cf399beed-kube-api-access-jnm6g" (OuterVolumeSpecName: "kube-api-access-jnm6g") pod "e1924f8a-318d-4d3b-ada5-703cf399beed" (UID: "e1924f8a-318d-4d3b-ada5-703cf399beed"). InnerVolumeSpecName "kube-api-access-jnm6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.210502 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-util" (OuterVolumeSpecName: "util") pod "e1924f8a-318d-4d3b-ada5-703cf399beed" (UID: "e1924f8a-318d-4d3b-ada5-703cf399beed"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.277913 4947 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.277969 4947 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-util\") on node \"crc\" DevicePath \"\"" Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.277990 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnm6g\" (UniqueName: \"kubernetes.io/projected/e1924f8a-318d-4d3b-ada5-703cf399beed-kube-api-access-jnm6g\") on node \"crc\" DevicePath \"\"" Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.750979 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" event={"ID":"e1924f8a-318d-4d3b-ada5-703cf399beed","Type":"ContainerDied","Data":"7525f14ae3f0ec04e7b4ff6ddaa5361d71583b1bb8588c944a3b51f09c233b98"} Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.751388 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7525f14ae3f0ec04e7b4ff6ddaa5361d71583b1bb8588c944a3b51f09c233b98" Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.751095 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.659722 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm"] Jan 25 00:21:42 crc kubenswrapper[4947]: E0125 00:21:42.660411 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerName="util" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.660435 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerName="util" Jan 25 00:21:42 crc kubenswrapper[4947]: E0125 00:21:42.660460 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerName="extract" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.660475 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerName="extract" Jan 25 00:21:42 crc kubenswrapper[4947]: E0125 00:21:42.660499 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerName="pull" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.660513 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerName="pull" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.660694 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerName="extract" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.661944 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.668569 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.678263 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm"] Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.722548 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbsfl\" (UniqueName: \"kubernetes.io/projected/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-kube-api-access-bbsfl\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.722681 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.722767 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: 
I0125 00:21:42.823585 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.823635 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.823701 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbsfl\" (UniqueName: \"kubernetes.io/projected/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-kube-api-access-bbsfl\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.824295 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.824597 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.851382 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbsfl\" (UniqueName: \"kubernetes.io/projected/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-kube-api-access-bbsfl\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.987440 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.222381 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm"] Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.440639 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk"] Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.442888 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.451731 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk"] Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.536948 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkzg5\" (UniqueName: \"kubernetes.io/projected/b18bd971-05aa-4366-8829-6d2db0f3a1a0-kube-api-access-vkzg5\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.537010 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.537034 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.638255 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkzg5\" (UniqueName: 
\"kubernetes.io/projected/b18bd971-05aa-4366-8829-6d2db0f3a1a0-kube-api-access-vkzg5\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.638322 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.638349 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.638913 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.638968 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: 
\"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.675095 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkzg5\" (UniqueName: \"kubernetes.io/projected/b18bd971-05aa-4366-8829-6d2db0f3a1a0-kube-api-access-vkzg5\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.779491 4947 generic.go:334] "Generic (PLEG): container finished" podID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerID="cb471fdf3dc7b256e2f9a47919cc149690af1eaa2a6e571e917f85b39c462994" exitCode=0 Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.779569 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" event={"ID":"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9","Type":"ContainerDied","Data":"cb471fdf3dc7b256e2f9a47919cc149690af1eaa2a6e571e917f85b39c462994"} Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.779620 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" event={"ID":"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9","Type":"ContainerStarted","Data":"0bbe8e18431b933dc4c8dd641ad64cbd85e19def267075f50533a7106fa4d7ce"} Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.883025 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" Jan 25 00:21:44 crc kubenswrapper[4947]: I0125 00:21:44.140058 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk"] Jan 25 00:21:44 crc kubenswrapper[4947]: W0125 00:21:44.154276 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb18bd971_05aa_4366_8829_6d2db0f3a1a0.slice/crio-630dccbe3637d72159e610ce692a503238fe899827098bd335c6ff36197beb37 WatchSource:0}: Error finding container 630dccbe3637d72159e610ce692a503238fe899827098bd335c6ff36197beb37: Status 404 returned error can't find the container with id 630dccbe3637d72159e610ce692a503238fe899827098bd335c6ff36197beb37 Jan 25 00:21:44 crc kubenswrapper[4947]: I0125 00:21:44.790763 4947 generic.go:334] "Generic (PLEG): container finished" podID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerID="1ad1d20bcbaa8c0f65d732dfbced8a2de80d1ca15d3a0b8903918df3d0e62520" exitCode=0 Jan 25 00:21:44 crc kubenswrapper[4947]: I0125 00:21:44.791035 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" event={"ID":"b18bd971-05aa-4366-8829-6d2db0f3a1a0","Type":"ContainerDied","Data":"1ad1d20bcbaa8c0f65d732dfbced8a2de80d1ca15d3a0b8903918df3d0e62520"} Jan 25 00:21:44 crc kubenswrapper[4947]: I0125 00:21:44.791479 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" event={"ID":"b18bd971-05aa-4366-8829-6d2db0f3a1a0","Type":"ContainerStarted","Data":"630dccbe3637d72159e610ce692a503238fe899827098bd335c6ff36197beb37"} Jan 25 00:21:45 crc kubenswrapper[4947]: I0125 00:21:45.796859 4947 generic.go:334] "Generic (PLEG): container finished" 
podID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerID="44fb5b127bacc5cb90d84be224e5039f0b675250aedeebdd106501dd15a0adcf" exitCode=0 Jan 25 00:21:45 crc kubenswrapper[4947]: I0125 00:21:45.796898 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" event={"ID":"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9","Type":"ContainerDied","Data":"44fb5b127bacc5cb90d84be224e5039f0b675250aedeebdd106501dd15a0adcf"} Jan 25 00:21:46 crc kubenswrapper[4947]: I0125 00:21:46.804542 4947 generic.go:334] "Generic (PLEG): container finished" podID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerID="af68368e1bac3ec2c5478bf448f97dc6bb0c2978ab481ae9d20f9945c6208f5b" exitCode=0 Jan 25 00:21:46 crc kubenswrapper[4947]: I0125 00:21:46.804652 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" event={"ID":"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9","Type":"ContainerDied","Data":"af68368e1bac3ec2c5478bf448f97dc6bb0c2978ab481ae9d20f9945c6208f5b"} Jan 25 00:21:46 crc kubenswrapper[4947]: I0125 00:21:46.806791 4947 generic.go:334] "Generic (PLEG): container finished" podID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerID="75619ffeb25c3155dfc3f42378a864e202e00a6989de5b1412dc5e50c007ae6a" exitCode=0 Jan 25 00:21:46 crc kubenswrapper[4947]: I0125 00:21:46.806854 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" event={"ID":"b18bd971-05aa-4366-8829-6d2db0f3a1a0","Type":"ContainerDied","Data":"75619ffeb25c3155dfc3f42378a864e202e00a6989de5b1412dc5e50c007ae6a"} Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.072740 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.073099 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.073171 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.073811 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f64176709d5607e82072c383022bad7109ba8a5dcfa7f1ccb95f13edf0e8c935"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.073875 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" containerID="cri-o://f64176709d5607e82072c383022bad7109ba8a5dcfa7f1ccb95f13edf0e8c935" gracePeriod=600 Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.814930 4947 generic.go:334] "Generic (PLEG): container finished" podID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerID="9c267f02649b54d162dfe171b89b39fe7c64733394091d4dd29fac4d4acf5c09" exitCode=0 Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.815017 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" 
event={"ID":"b18bd971-05aa-4366-8829-6d2db0f3a1a0","Type":"ContainerDied","Data":"9c267f02649b54d162dfe171b89b39fe7c64733394091d4dd29fac4d4acf5c09"} Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.818339 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="f64176709d5607e82072c383022bad7109ba8a5dcfa7f1ccb95f13edf0e8c935" exitCode=0 Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.818427 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"f64176709d5607e82072c383022bad7109ba8a5dcfa7f1ccb95f13edf0e8c935"} Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.818519 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"3d54be0f7cb7c92bd5d7a293f8c1b17de01e73a4d2768de922b6d4f49ead1a39"} Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.818552 4947 scope.go:117] "RemoveContainer" containerID="3aad7ba55a48c39182daae557e3aa5f13d9797554883516debfe73624ee84ab3" Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.128671 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.305727 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-util\") pod \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.305830 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbsfl\" (UniqueName: \"kubernetes.io/projected/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-kube-api-access-bbsfl\") pod \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.305875 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-bundle\") pod \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.306836 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-bundle" (OuterVolumeSpecName: "bundle") pod "1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" (UID: "1cfc506e-cb97-4eb4-a967-d5ea940b5ce9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.315685 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-kube-api-access-bbsfl" (OuterVolumeSpecName: "kube-api-access-bbsfl") pod "1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" (UID: "1cfc506e-cb97-4eb4-a967-d5ea940b5ce9"). InnerVolumeSpecName "kube-api-access-bbsfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.331248 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-util" (OuterVolumeSpecName: "util") pod "1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" (UID: "1cfc506e-cb97-4eb4-a967-d5ea940b5ce9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.407408 4947 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-util\") on node \"crc\" DevicePath \"\"" Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.407451 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbsfl\" (UniqueName: \"kubernetes.io/projected/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-kube-api-access-bbsfl\") on node \"crc\" DevicePath \"\"" Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.407462 4947 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.827923 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" event={"ID":"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9","Type":"ContainerDied","Data":"0bbe8e18431b933dc4c8dd641ad64cbd85e19def267075f50533a7106fa4d7ce"} Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.827973 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bbe8e18431b933dc4c8dd641ad64cbd85e19def267075f50533a7106fa4d7ce" Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.827933 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.141807 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf"] Jan 25 00:21:49 crc kubenswrapper[4947]: E0125 00:21:49.142299 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerName="extract" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.142312 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerName="extract" Jan 25 00:21:49 crc kubenswrapper[4947]: E0125 00:21:49.142330 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerName="util" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.142335 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerName="util" Jan 25 00:21:49 crc kubenswrapper[4947]: E0125 00:21:49.142343 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerName="pull" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.142349 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerName="pull" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.142438 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerName="extract" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.145406 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.159053 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf"] Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.249421 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.318676 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.318742 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tn47\" (UniqueName: \"kubernetes.io/projected/373809d6-f72c-4eff-afeb-1fa942bb9e22-kube-api-access-4tn47\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.318769 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.421452 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkzg5\" (UniqueName: \"kubernetes.io/projected/b18bd971-05aa-4366-8829-6d2db0f3a1a0-kube-api-access-vkzg5\") pod \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.421523 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-util\") pod \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.421574 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-bundle\") pod \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.421757 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tn47\" (UniqueName: \"kubernetes.io/projected/373809d6-f72c-4eff-afeb-1fa942bb9e22-kube-api-access-4tn47\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.421793 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.421845 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.422380 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.423542 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-bundle" (OuterVolumeSpecName: "bundle") pod "b18bd971-05aa-4366-8829-6d2db0f3a1a0" (UID: "b18bd971-05aa-4366-8829-6d2db0f3a1a0"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.424117 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.432290 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b18bd971-05aa-4366-8829-6d2db0f3a1a0-kube-api-access-vkzg5" (OuterVolumeSpecName: "kube-api-access-vkzg5") pod "b18bd971-05aa-4366-8829-6d2db0f3a1a0" (UID: "b18bd971-05aa-4366-8829-6d2db0f3a1a0"). InnerVolumeSpecName "kube-api-access-vkzg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.454845 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-util" (OuterVolumeSpecName: "util") pod "b18bd971-05aa-4366-8829-6d2db0f3a1a0" (UID: "b18bd971-05aa-4366-8829-6d2db0f3a1a0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.486818 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tn47\" (UniqueName: \"kubernetes.io/projected/373809d6-f72c-4eff-afeb-1fa942bb9e22-kube-api-access-4tn47\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.516136 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.522554 4947 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.522583 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkzg5\" (UniqueName: \"kubernetes.io/projected/b18bd971-05aa-4366-8829-6d2db0f3a1a0-kube-api-access-vkzg5\") on node \"crc\" DevicePath \"\"" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.522593 4947 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-util\") on node \"crc\" DevicePath \"\"" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.830170 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf"] Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.835329 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" event={"ID":"b18bd971-05aa-4366-8829-6d2db0f3a1a0","Type":"ContainerDied","Data":"630dccbe3637d72159e610ce692a503238fe899827098bd335c6ff36197beb37"} Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.835362 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="630dccbe3637d72159e610ce692a503238fe899827098bd335c6ff36197beb37" Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.835429 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" Jan 25 00:21:49 crc kubenswrapper[4947]: W0125 00:21:49.845183 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod373809d6_f72c_4eff_afeb_1fa942bb9e22.slice/crio-ade22be7624bdefe70bef3c545d6a1213dc871588b1a8aea09e49b8ce71787cb WatchSource:0}: Error finding container ade22be7624bdefe70bef3c545d6a1213dc871588b1a8aea09e49b8ce71787cb: Status 404 returned error can't find the container with id ade22be7624bdefe70bef3c545d6a1213dc871588b1a8aea09e49b8ce71787cb Jan 25 00:21:50 crc kubenswrapper[4947]: I0125 00:21:50.857538 4947 generic.go:334] "Generic (PLEG): container finished" podID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerID="25f39926cd905f42a5f17c64be28974859261061e4e417d3c9b8a40d0d2ab729" exitCode=0 Jan 25 00:21:50 crc kubenswrapper[4947]: I0125 00:21:50.857921 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" event={"ID":"373809d6-f72c-4eff-afeb-1fa942bb9e22","Type":"ContainerDied","Data":"25f39926cd905f42a5f17c64be28974859261061e4e417d3c9b8a40d0d2ab729"} Jan 25 00:21:50 crc kubenswrapper[4947]: I0125 00:21:50.858010 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" event={"ID":"373809d6-f72c-4eff-afeb-1fa942bb9e22","Type":"ContainerStarted","Data":"ade22be7624bdefe70bef3c545d6a1213dc871588b1a8aea09e49b8ce71787cb"} Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.799012 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rhx28"] Jan 25 00:21:51 crc kubenswrapper[4947]: E0125 00:21:51.799632 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerName="pull" 
Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.799647 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerName="pull" Jan 25 00:21:51 crc kubenswrapper[4947]: E0125 00:21:51.799657 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerName="util" Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.799663 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerName="util" Jan 25 00:21:51 crc kubenswrapper[4947]: E0125 00:21:51.799671 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerName="extract" Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.799680 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerName="extract" Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.799786 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerName="extract" Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.800484 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.845581 4947 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.878744 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhx28"] Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.952764 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkh5r\" (UniqueName: \"kubernetes.io/projected/da41a595-7e83-406d-b782-de0adf6e3d8d-kube-api-access-gkh5r\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.952852 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-utilities\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.952878 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-catalog-content\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.996013 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hpxfq"] Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.996946 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.010668 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hpxfq"] Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.056715 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkh5r\" (UniqueName: \"kubernetes.io/projected/da41a595-7e83-406d-b782-de0adf6e3d8d-kube-api-access-gkh5r\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.056838 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-utilities\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.056871 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-catalog-content\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.057404 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-catalog-content\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.057463 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-utilities\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.078275 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkh5r\" (UniqueName: \"kubernetes.io/projected/da41a595-7e83-406d-b782-de0adf6e3d8d-kube-api-access-gkh5r\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.116665 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.158168 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bmsn\" (UniqueName: \"kubernetes.io/projected/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-kube-api-access-9bmsn\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.158211 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-utilities\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.158233 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-catalog-content\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " 
pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.259322 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bmsn\" (UniqueName: \"kubernetes.io/projected/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-kube-api-access-9bmsn\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.259367 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-utilities\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.259399 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-catalog-content\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.260050 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-utilities\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.260064 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-catalog-content\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:21:52 crc 
kubenswrapper[4947]: I0125 00:21:52.283797 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bmsn\" (UniqueName: \"kubernetes.io/projected/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-kube-api-access-9bmsn\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.315307 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.423715 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s"] Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.424488 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.429776 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.429855 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-gwlqr" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.430054 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.447850 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s"] Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.563289 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfrpk\" (UniqueName: \"kubernetes.io/projected/3e662e75-c8ba-4da8-856f-9fc73a2316aa-kube-api-access-cfrpk\") pod 
\"obo-prometheus-operator-68bc856cb9-wjw4s\" (UID: \"3e662e75-c8ba-4da8-856f-9fc73a2316aa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.566653 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhx28"] Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.571415 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx"] Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.572099 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.575779 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.576068 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-dslsv" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.588545 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k"] Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.593649 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx"] Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.593805 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" Jan 25 00:21:52 crc kubenswrapper[4947]: W0125 00:21:52.605316 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda41a595_7e83_406d_b782_de0adf6e3d8d.slice/crio-cfa4a1c82eb366c0f3d41e29a345c0087d291286eca35ee7c803be1e30f42277 WatchSource:0}: Error finding container cfa4a1c82eb366c0f3d41e29a345c0087d291286eca35ee7c803be1e30f42277: Status 404 returned error can't find the container with id cfa4a1c82eb366c0f3d41e29a345c0087d291286eca35ee7c803be1e30f42277 Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.625448 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k"] Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.634365 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hpxfq"] Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.667333 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3860bf6-f86b-4206-a225-6fa61372a988-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx\" (UID: \"a3860bf6-f86b-4206-a225-6fa61372a988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.667401 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a3860bf6-f86b-4206-a225-6fa61372a988-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx\" (UID: \"a3860bf6-f86b-4206-a225-6fa61372a988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" Jan 25 00:21:52 crc 
kubenswrapper[4947]: I0125 00:21:52.667449 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfrpk\" (UniqueName: \"kubernetes.io/projected/3e662e75-c8ba-4da8-856f-9fc73a2316aa-kube-api-access-cfrpk\") pod \"obo-prometheus-operator-68bc856cb9-wjw4s\" (UID: \"3e662e75-c8ba-4da8-856f-9fc73a2316aa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.689710 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfrpk\" (UniqueName: \"kubernetes.io/projected/3e662e75-c8ba-4da8-856f-9fc73a2316aa-kube-api-access-cfrpk\") pod \"obo-prometheus-operator-68bc856cb9-wjw4s\" (UID: \"3e662e75-c8ba-4da8-856f-9fc73a2316aa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.753832 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.758322 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4v5sm"] Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.758966 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.761823 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.762015 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-6bvcn" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.771949 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae208ca2-2ac2-4a6a-b88e-127c986f32a5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k\" (UID: \"ae208ca2-2ac2-4a6a-b88e-127c986f32a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.772036 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae208ca2-2ac2-4a6a-b88e-127c986f32a5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k\" (UID: \"ae208ca2-2ac2-4a6a-b88e-127c986f32a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.772087 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3860bf6-f86b-4206-a225-6fa61372a988-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx\" (UID: \"a3860bf6-f86b-4206-a225-6fa61372a988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.772117 4947 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a3860bf6-f86b-4206-a225-6fa61372a988-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx\" (UID: \"a3860bf6-f86b-4206-a225-6fa61372a988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.776515 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3860bf6-f86b-4206-a225-6fa61372a988-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx\" (UID: \"a3860bf6-f86b-4206-a225-6fa61372a988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.783242 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a3860bf6-f86b-4206-a225-6fa61372a988-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx\" (UID: \"a3860bf6-f86b-4206-a225-6fa61372a988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.787414 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4v5sm"] Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.873065 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae208ca2-2ac2-4a6a-b88e-127c986f32a5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k\" (UID: \"ae208ca2-2ac2-4a6a-b88e-127c986f32a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.873618 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d3adf01-5529-4edb-9b7f-f3c782156a8d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4v5sm\" (UID: \"9d3adf01-5529-4edb-9b7f-f3c782156a8d\") " pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.873669 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae208ca2-2ac2-4a6a-b88e-127c986f32a5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k\" (UID: \"ae208ca2-2ac2-4a6a-b88e-127c986f32a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.873694 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfvkj\" (UniqueName: \"kubernetes.io/projected/9d3adf01-5529-4edb-9b7f-f3c782156a8d-kube-api-access-hfvkj\") pod \"observability-operator-59bdc8b94-4v5sm\" (UID: \"9d3adf01-5529-4edb-9b7f-f3c782156a8d\") " pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.879819 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae208ca2-2ac2-4a6a-b88e-127c986f32a5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k\" (UID: \"ae208ca2-2ac2-4a6a-b88e-127c986f32a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.879931 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae208ca2-2ac2-4a6a-b88e-127c986f32a5-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k\" (UID: \"ae208ca2-2ac2-4a6a-b88e-127c986f32a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.883516 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpxfq" event={"ID":"c8e8b07d-e9e8-4efd-a05d-f09f78abca00","Type":"ContainerStarted","Data":"9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8"} Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.883587 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpxfq" event={"ID":"c8e8b07d-e9e8-4efd-a05d-f09f78abca00","Type":"ContainerStarted","Data":"13b192541fa7589b2466360e6399546425f5d80b3b3a89c1761b0ae60a095da2"} Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.889439 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.889632 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhx28" event={"ID":"da41a595-7e83-406d-b782-de0adf6e3d8d","Type":"ContainerStarted","Data":"318d02fed846a5ed6901b65f31cfc5249873f0176dd5ac1452713156f5ee3ae6"} Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.889760 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhx28" event={"ID":"da41a595-7e83-406d-b782-de0adf6e3d8d","Type":"ContainerStarted","Data":"cfa4a1c82eb366c0f3d41e29a345c0087d291286eca35ee7c803be1e30f42277"} Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.926504 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.975289 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d3adf01-5529-4edb-9b7f-f3c782156a8d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4v5sm\" (UID: \"9d3adf01-5529-4edb-9b7f-f3c782156a8d\") " pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.975403 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfvkj\" (UniqueName: \"kubernetes.io/projected/9d3adf01-5529-4edb-9b7f-f3c782156a8d-kube-api-access-hfvkj\") pod \"observability-operator-59bdc8b94-4v5sm\" (UID: \"9d3adf01-5529-4edb-9b7f-f3c782156a8d\") " pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.980369 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d3adf01-5529-4edb-9b7f-f3c782156a8d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4v5sm\" (UID: \"9d3adf01-5529-4edb-9b7f-f3c782156a8d\") " pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.001404 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-qz44g"] Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.002953 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.009910 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfvkj\" (UniqueName: \"kubernetes.io/projected/9d3adf01-5529-4edb-9b7f-f3c782156a8d-kube-api-access-hfvkj\") pod \"observability-operator-59bdc8b94-4v5sm\" (UID: \"9d3adf01-5529-4edb-9b7f-f3c782156a8d\") " pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.012659 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-h78sp" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.030579 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-qz44g"] Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.100406 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.177810 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/38944919-0d65-4fdd-b2bd-2780f8e77bde-openshift-service-ca\") pod \"perses-operator-5bf474d74f-qz44g\" (UID: \"38944919-0d65-4fdd-b2bd-2780f8e77bde\") " pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.178398 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv2mn\" (UniqueName: \"kubernetes.io/projected/38944919-0d65-4fdd-b2bd-2780f8e77bde-kube-api-access-bv2mn\") pod \"perses-operator-5bf474d74f-qz44g\" (UID: \"38944919-0d65-4fdd-b2bd-2780f8e77bde\") " pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:21:53 crc kubenswrapper[4947]: 
I0125 00:21:53.281708 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv2mn\" (UniqueName: \"kubernetes.io/projected/38944919-0d65-4fdd-b2bd-2780f8e77bde-kube-api-access-bv2mn\") pod \"perses-operator-5bf474d74f-qz44g\" (UID: \"38944919-0d65-4fdd-b2bd-2780f8e77bde\") " pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.281793 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/38944919-0d65-4fdd-b2bd-2780f8e77bde-openshift-service-ca\") pod \"perses-operator-5bf474d74f-qz44g\" (UID: \"38944919-0d65-4fdd-b2bd-2780f8e77bde\") " pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.282779 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/38944919-0d65-4fdd-b2bd-2780f8e77bde-openshift-service-ca\") pod \"perses-operator-5bf474d74f-qz44g\" (UID: \"38944919-0d65-4fdd-b2bd-2780f8e77bde\") " pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.326837 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv2mn\" (UniqueName: \"kubernetes.io/projected/38944919-0d65-4fdd-b2bd-2780f8e77bde-kube-api-access-bv2mn\") pod \"perses-operator-5bf474d74f-qz44g\" (UID: \"38944919-0d65-4fdd-b2bd-2780f8e77bde\") " pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.335063 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s"] Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.341453 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.584624 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx"] Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.620898 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k"] Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.627160 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4v5sm"] Jan 25 00:21:53 crc kubenswrapper[4947]: W0125 00:21:53.644679 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae208ca2_2ac2_4a6a_b88e_127c986f32a5.slice/crio-99fd7611954c55918f6cf83a4fa39bcb6a224762ca69c3f597655aea7af46ef1 WatchSource:0}: Error finding container 99fd7611954c55918f6cf83a4fa39bcb6a224762ca69c3f597655aea7af46ef1: Status 404 returned error can't find the container with id 99fd7611954c55918f6cf83a4fa39bcb6a224762ca69c3f597655aea7af46ef1 Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.902769 4947 generic.go:334] "Generic (PLEG): container finished" podID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerID="318d02fed846a5ed6901b65f31cfc5249873f0176dd5ac1452713156f5ee3ae6" exitCode=0 Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.902880 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhx28" event={"ID":"da41a595-7e83-406d-b782-de0adf6e3d8d","Type":"ContainerDied","Data":"318d02fed846a5ed6901b65f31cfc5249873f0176dd5ac1452713156f5ee3ae6"} Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.910839 4947 generic.go:334] "Generic (PLEG): container finished" podID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" 
containerID="9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8" exitCode=0 Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.910928 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpxfq" event={"ID":"c8e8b07d-e9e8-4efd-a05d-f09f78abca00","Type":"ContainerDied","Data":"9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8"} Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.912488 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s" event={"ID":"3e662e75-c8ba-4da8-856f-9fc73a2316aa","Type":"ContainerStarted","Data":"925ec9a913eb4eacc0aa9ece9d9bc08723ee8e783934679ef2edbd5c06097b00"} Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.914653 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" event={"ID":"9d3adf01-5529-4edb-9b7f-f3c782156a8d","Type":"ContainerStarted","Data":"270ad2e21b988d83ce4665713f47d6182fd02f996f87f0310d8cff5873ce78f3"} Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.915673 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" event={"ID":"ae208ca2-2ac2-4a6a-b88e-127c986f32a5","Type":"ContainerStarted","Data":"99fd7611954c55918f6cf83a4fa39bcb6a224762ca69c3f597655aea7af46ef1"} Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.916904 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" event={"ID":"a3860bf6-f86b-4206-a225-6fa61372a988","Type":"ContainerStarted","Data":"3e2642cfd0bcb420bbae946fcdc8882c134f65e86f73da36a1f1659b02673f57"} Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.977407 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-qz44g"] Jan 25 00:21:53 crc 
kubenswrapper[4947]: W0125 00:21:53.988785 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38944919_0d65_4fdd_b2bd_2780f8e77bde.slice/crio-288207986ef4335c4bcc98b5c6c8a5fe2193c346c14f3d2efb886954ecf93d76 WatchSource:0}: Error finding container 288207986ef4335c4bcc98b5c6c8a5fe2193c346c14f3d2efb886954ecf93d76: Status 404 returned error can't find the container with id 288207986ef4335c4bcc98b5c6c8a5fe2193c346c14f3d2efb886954ecf93d76 Jan 25 00:21:54 crc kubenswrapper[4947]: I0125 00:21:54.925835 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-qz44g" event={"ID":"38944919-0d65-4fdd-b2bd-2780f8e77bde","Type":"ContainerStarted","Data":"288207986ef4335c4bcc98b5c6c8a5fe2193c346c14f3d2efb886954ecf93d76"} Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.789201 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-54ddbf459f-pm6cr"] Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.790304 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.791893 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-65qws" Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.792447 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.792992 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.793253 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.825063 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-54ddbf459f-pm6cr"] Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.921778 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v28tm\" (UniqueName: \"kubernetes.io/projected/2f1a951a-1385-42b0-acf1-a549b0edb031-kube-api-access-v28tm\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.922354 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f1a951a-1385-42b0-acf1-a549b0edb031-webhook-cert\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.922394 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f1a951a-1385-42b0-acf1-a549b0edb031-apiservice-cert\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:01 crc kubenswrapper[4947]: I0125 00:22:01.023058 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f1a951a-1385-42b0-acf1-a549b0edb031-apiservice-cert\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:01 crc kubenswrapper[4947]: I0125 00:22:01.023185 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v28tm\" (UniqueName: \"kubernetes.io/projected/2f1a951a-1385-42b0-acf1-a549b0edb031-kube-api-access-v28tm\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:01 crc kubenswrapper[4947]: I0125 00:22:01.023256 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f1a951a-1385-42b0-acf1-a549b0edb031-webhook-cert\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:01 crc kubenswrapper[4947]: I0125 00:22:01.035998 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f1a951a-1385-42b0-acf1-a549b0edb031-webhook-cert\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:01 crc kubenswrapper[4947]: I0125 00:22:01.040945 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f1a951a-1385-42b0-acf1-a549b0edb031-apiservice-cert\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:01 crc kubenswrapper[4947]: I0125 00:22:01.049112 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v28tm\" (UniqueName: \"kubernetes.io/projected/2f1a951a-1385-42b0-acf1-a549b0edb031-kube-api-access-v28tm\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:01 crc kubenswrapper[4947]: I0125 00:22:01.122331 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:05 crc kubenswrapper[4947]: I0125 00:22:05.510267 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-2zk49"] Jan 25 00:22:05 crc kubenswrapper[4947]: I0125 00:22:05.511657 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-2zk49"] Jan 25 00:22:05 crc kubenswrapper[4947]: I0125 00:22:05.511784 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" Jan 25 00:22:05 crc kubenswrapper[4947]: I0125 00:22:05.517729 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-nwncq" Jan 25 00:22:05 crc kubenswrapper[4947]: I0125 00:22:05.591723 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx6qn\" (UniqueName: \"kubernetes.io/projected/fb65215d-4c8c-4191-a224-f49ec8acfaa0-kube-api-access-sx6qn\") pod \"interconnect-operator-5bb49f789d-2zk49\" (UID: \"fb65215d-4c8c-4191-a224-f49ec8acfaa0\") " pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" Jan 25 00:22:05 crc kubenswrapper[4947]: I0125 00:22:05.693170 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx6qn\" (UniqueName: \"kubernetes.io/projected/fb65215d-4c8c-4191-a224-f49ec8acfaa0-kube-api-access-sx6qn\") pod \"interconnect-operator-5bb49f789d-2zk49\" (UID: \"fb65215d-4c8c-4191-a224-f49ec8acfaa0\") " pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" Jan 25 00:22:05 crc kubenswrapper[4947]: I0125 00:22:05.713993 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx6qn\" (UniqueName: \"kubernetes.io/projected/fb65215d-4c8c-4191-a224-f49ec8acfaa0-kube-api-access-sx6qn\") pod \"interconnect-operator-5bb49f789d-2zk49\" (UID: \"fb65215d-4c8c-4191-a224-f49ec8acfaa0\") " pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" Jan 25 00:22:05 crc kubenswrapper[4947]: I0125 00:22:05.830559 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" Jan 25 00:22:08 crc kubenswrapper[4947]: W0125 00:22:08.769499 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f1a951a_1385_42b0_acf1_a549b0edb031.slice/crio-8c94f54fe23f2f506042b577aa7899f4949a9458787409a1007b64d0df33bcf4 WatchSource:0}: Error finding container 8c94f54fe23f2f506042b577aa7899f4949a9458787409a1007b64d0df33bcf4: Status 404 returned error can't find the container with id 8c94f54fe23f2f506042b577aa7899f4949a9458787409a1007b64d0df33bcf4 Jan 25 00:22:08 crc kubenswrapper[4947]: I0125 00:22:08.772653 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-54ddbf459f-pm6cr"] Jan 25 00:22:08 crc kubenswrapper[4947]: I0125 00:22:08.860075 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-2zk49"] Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.095468 4947 generic.go:334] "Generic (PLEG): container finished" podID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerID="fe4a1fe6e51e5ad30cdc321eb4c773de634c37178f521555c52df0019e2fd1ff" exitCode=0 Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.095522 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" event={"ID":"373809d6-f72c-4eff-afeb-1fa942bb9e22","Type":"ContainerDied","Data":"fe4a1fe6e51e5ad30cdc321eb4c773de634c37178f521555c52df0019e2fd1ff"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.098885 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpxfq" event={"ID":"c8e8b07d-e9e8-4efd-a05d-f09f78abca00","Type":"ContainerStarted","Data":"0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.100423 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s" event={"ID":"3e662e75-c8ba-4da8-856f-9fc73a2316aa","Type":"ContainerStarted","Data":"551ef2892572eb4871537778a088d8934ad6b5f1ae4156f9caaae98ce5af56f6"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.101408 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" event={"ID":"fb65215d-4c8c-4191-a224-f49ec8acfaa0","Type":"ContainerStarted","Data":"5b754e698f956dc3697fd48452f47a9632788ced80623b83bd46f8d26cbb652e"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.102409 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" event={"ID":"9d3adf01-5529-4edb-9b7f-f3c782156a8d","Type":"ContainerStarted","Data":"beaaecb1c8b32812f301fa8e5e9b50d12af967114f2fc0893f9320e72f2d246a"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.102999 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.105394 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.106152 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" event={"ID":"a3860bf6-f86b-4206-a225-6fa61372a988","Type":"ContainerStarted","Data":"02ed1a0c720a38e02733355038992aba24c25bc7af33d943f97286d0c4533a02"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.107938 4947 generic.go:334] "Generic (PLEG): container finished" podID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerID="6cbb445a3f80a4c27484f46896be1c01f1f1240b378c14e2be68c255ab07f7b9" exitCode=0 Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 
00:22:09.108008 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhx28" event={"ID":"da41a595-7e83-406d-b782-de0adf6e3d8d","Type":"ContainerDied","Data":"6cbb445a3f80a4c27484f46896be1c01f1f1240b378c14e2be68c255ab07f7b9"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.111314 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-qz44g" event={"ID":"38944919-0d65-4fdd-b2bd-2780f8e77bde","Type":"ContainerStarted","Data":"89dc773416d0f3b62a79df4b3cf5746323125f6fafdefd10816de98b604d5c61"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.111511 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.119871 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" event={"ID":"2f1a951a-1385-42b0-acf1-a549b0edb031","Type":"ContainerStarted","Data":"8c94f54fe23f2f506042b577aa7899f4949a9458787409a1007b64d0df33bcf4"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.122639 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" event={"ID":"ae208ca2-2ac2-4a6a-b88e-127c986f32a5","Type":"ContainerStarted","Data":"2096cc33af18c7db5f1d812dd28fbdd42b45054ba2aadbb06368654272887f60"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.176496 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s" podStartSLOduration=2.308180846 podStartE2EDuration="17.176474428s" podCreationTimestamp="2026-01-25 00:21:52 +0000 UTC" firstStartedPulling="2026-01-25 00:21:53.398034483 +0000 UTC m=+752.631024923" lastFinishedPulling="2026-01-25 00:22:08.266328055 +0000 UTC m=+767.499318505" observedRunningTime="2026-01-25 
00:22:09.174706034 +0000 UTC m=+768.407696474" watchObservedRunningTime="2026-01-25 00:22:09.176474428 +0000 UTC m=+768.409464878" Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.218221 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" podStartSLOduration=2.5905154120000002 podStartE2EDuration="17.218201479s" podCreationTimestamp="2026-01-25 00:21:52 +0000 UTC" firstStartedPulling="2026-01-25 00:21:53.656059935 +0000 UTC m=+752.889050375" lastFinishedPulling="2026-01-25 00:22:08.283745982 +0000 UTC m=+767.516736442" observedRunningTime="2026-01-25 00:22:09.213452635 +0000 UTC m=+768.446443075" watchObservedRunningTime="2026-01-25 00:22:09.218201479 +0000 UTC m=+768.451191919" Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.288879 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-qz44g" podStartSLOduration=3.040099591 podStartE2EDuration="17.288862075s" podCreationTimestamp="2026-01-25 00:21:52 +0000 UTC" firstStartedPulling="2026-01-25 00:21:53.991924485 +0000 UTC m=+753.224914925" lastFinishedPulling="2026-01-25 00:22:08.240686959 +0000 UTC m=+767.473677409" observedRunningTime="2026-01-25 00:22:09.28659011 +0000 UTC m=+768.519580550" watchObservedRunningTime="2026-01-25 00:22:09.288862075 +0000 UTC m=+768.521852515" Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.307525 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" podStartSLOduration=2.723992075 podStartE2EDuration="17.307509782s" podCreationTimestamp="2026-01-25 00:21:52 +0000 UTC" firstStartedPulling="2026-01-25 00:21:53.682235603 +0000 UTC m=+752.915226043" lastFinishedPulling="2026-01-25 00:22:08.26575326 +0000 UTC m=+767.498743750" observedRunningTime="2026-01-25 00:22:09.30530729 +0000 UTC m=+768.538297730" 
watchObservedRunningTime="2026-01-25 00:22:09.307509782 +0000 UTC m=+768.540500222" Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.364827 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" podStartSLOduration=2.75461781 podStartE2EDuration="17.364806987s" podCreationTimestamp="2026-01-25 00:21:52 +0000 UTC" firstStartedPulling="2026-01-25 00:21:53.655674786 +0000 UTC m=+752.888665226" lastFinishedPulling="2026-01-25 00:22:08.265863953 +0000 UTC m=+767.498854403" observedRunningTime="2026-01-25 00:22:09.361516328 +0000 UTC m=+768.594506768" watchObservedRunningTime="2026-01-25 00:22:09.364806987 +0000 UTC m=+768.597797427" Jan 25 00:22:10 crc kubenswrapper[4947]: I0125 00:22:10.133592 4947 generic.go:334] "Generic (PLEG): container finished" podID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerID="0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0" exitCode=0 Jan 25 00:22:10 crc kubenswrapper[4947]: I0125 00:22:10.133905 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpxfq" event={"ID":"c8e8b07d-e9e8-4efd-a05d-f09f78abca00","Type":"ContainerDied","Data":"0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0"} Jan 25 00:22:10 crc kubenswrapper[4947]: I0125 00:22:10.139619 4947 generic.go:334] "Generic (PLEG): container finished" podID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerID="5308d7d0366e6f92031ccebe149d4fb138248b027eeea8f45d458990d0afa9b4" exitCode=0 Jan 25 00:22:10 crc kubenswrapper[4947]: I0125 00:22:10.139680 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" event={"ID":"373809d6-f72c-4eff-afeb-1fa942bb9e22","Type":"ContainerDied","Data":"5308d7d0366e6f92031ccebe149d4fb138248b027eeea8f45d458990d0afa9b4"} Jan 25 00:22:10 crc kubenswrapper[4947]: I0125 00:22:10.142407 
4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhx28" event={"ID":"da41a595-7e83-406d-b782-de0adf6e3d8d","Type":"ContainerStarted","Data":"101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2"} Jan 25 00:22:11 crc kubenswrapper[4947]: I0125 00:22:11.125069 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rhx28" podStartSLOduration=4.480411677 podStartE2EDuration="20.12504854s" podCreationTimestamp="2026-01-25 00:21:51 +0000 UTC" firstStartedPulling="2026-01-25 00:21:53.904858356 +0000 UTC m=+753.137848796" lastFinishedPulling="2026-01-25 00:22:09.549495229 +0000 UTC m=+768.782485659" observedRunningTime="2026-01-25 00:22:10.205412881 +0000 UTC m=+769.438403321" watchObservedRunningTime="2026-01-25 00:22:11.12504854 +0000 UTC m=+770.358038980" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.117151 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.118249 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.157484 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.186008 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" event={"ID":"373809d6-f72c-4eff-afeb-1fa942bb9e22","Type":"ContainerDied","Data":"ade22be7624bdefe70bef3c545d6a1213dc871588b1a8aea09e49b8ce71787cb"} Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.186072 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ade22be7624bdefe70bef3c545d6a1213dc871588b1a8aea09e49b8ce71787cb" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.186033 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.187207 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.212915 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tn47\" (UniqueName: \"kubernetes.io/projected/373809d6-f72c-4eff-afeb-1fa942bb9e22-kube-api-access-4tn47\") pod \"373809d6-f72c-4eff-afeb-1fa942bb9e22\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.212996 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-bundle\") pod \"373809d6-f72c-4eff-afeb-1fa942bb9e22\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.213104 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-util\") pod \"373809d6-f72c-4eff-afeb-1fa942bb9e22\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.217293 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-bundle" (OuterVolumeSpecName: "bundle") pod "373809d6-f72c-4eff-afeb-1fa942bb9e22" (UID: "373809d6-f72c-4eff-afeb-1fa942bb9e22"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.229416 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/373809d6-f72c-4eff-afeb-1fa942bb9e22-kube-api-access-4tn47" (OuterVolumeSpecName: "kube-api-access-4tn47") pod "373809d6-f72c-4eff-afeb-1fa942bb9e22" (UID: "373809d6-f72c-4eff-afeb-1fa942bb9e22"). InnerVolumeSpecName "kube-api-access-4tn47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.247248 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-util" (OuterVolumeSpecName: "util") pod "373809d6-f72c-4eff-afeb-1fa942bb9e22" (UID: "373809d6-f72c-4eff-afeb-1fa942bb9e22"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.314715 4947 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-util\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.314748 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tn47\" (UniqueName: \"kubernetes.io/projected/373809d6-f72c-4eff-afeb-1fa942bb9e22-kube-api-access-4tn47\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.314758 4947 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.215732 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpxfq" event={"ID":"c8e8b07d-e9e8-4efd-a05d-f09f78abca00","Type":"ContainerStarted","Data":"e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf"} Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.219201 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" event={"ID":"2f1a951a-1385-42b0-acf1-a549b0edb031","Type":"ContainerStarted","Data":"372e069faa69d0ce3a9062a84433192115fe716146663a353a70589654e4122c"} Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.270238 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hpxfq" podStartSLOduration=3.995618243 podStartE2EDuration="22.270216983s" podCreationTimestamp="2026-01-25 00:21:51 +0000 UTC" firstStartedPulling="2026-01-25 00:21:53.912832076 +0000 UTC m=+753.145822516" lastFinishedPulling="2026-01-25 00:22:12.187430806 +0000 UTC m=+771.420421256" observedRunningTime="2026-01-25 
00:22:13.267593959 +0000 UTC m=+772.500584399" watchObservedRunningTime="2026-01-25 00:22:13.270216983 +0000 UTC m=+772.503207423" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.310623 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" podStartSLOduration=9.88540953 podStartE2EDuration="13.310597611s" podCreationTimestamp="2026-01-25 00:22:00 +0000 UTC" firstStartedPulling="2026-01-25 00:22:08.774970341 +0000 UTC m=+768.007960781" lastFinishedPulling="2026-01-25 00:22:12.200158402 +0000 UTC m=+771.433148862" observedRunningTime="2026-01-25 00:22:13.3076575 +0000 UTC m=+772.540647960" watchObservedRunningTime="2026-01-25 00:22:13.310597611 +0000 UTC m=+772.543588051" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.344491 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.519635 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 25 00:22:13 crc kubenswrapper[4947]: E0125 00:22:13.520042 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerName="pull" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.520072 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerName="pull" Jan 25 00:22:13 crc kubenswrapper[4947]: E0125 00:22:13.520106 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerName="util" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.520117 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerName="util" Jan 25 00:22:13 crc kubenswrapper[4947]: E0125 00:22:13.520156 4947 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerName="extract" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.520170 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerName="extract" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.520291 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerName="extract" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.521285 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.531336 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.531477 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.531460 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.532698 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-hzxwm" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.534380 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.534437 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.542560 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 
00:22:13.542884 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.543040 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.550569 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631347 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631392 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631428 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631447 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631479 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631544 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631636 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631690 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631712 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631782 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631812 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631840 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631856 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631883 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631924 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733117 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733213 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: 
\"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733240 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733266 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733296 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733316 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733372 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: 
\"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733412 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733436 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733460 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733485 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733525 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733549 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733577 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733598 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.737641 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: 
\"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.737948 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.738201 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.738415 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.740416 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.740881 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: 
\"kubernetes.io/downward-api/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.741698 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.742039 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.742367 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.742386 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.742693 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.742983 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.743185 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.747821 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.747940 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.844457 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:14 crc kubenswrapper[4947]: I0125 00:22:14.281265 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:22:17 crc kubenswrapper[4947]: I0125 00:22:17.184616 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhx28"] Jan 25 00:22:17 crc kubenswrapper[4947]: I0125 00:22:17.185075 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rhx28" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="registry-server" containerID="cri-o://101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2" gracePeriod=2 Jan 25 00:22:19 crc kubenswrapper[4947]: I0125 00:22:19.263376 4947 generic.go:334] "Generic (PLEG): container finished" podID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerID="101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2" exitCode=0 Jan 25 00:22:19 crc kubenswrapper[4947]: I0125 00:22:19.263420 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhx28" event={"ID":"da41a595-7e83-406d-b782-de0adf6e3d8d","Type":"ContainerDied","Data":"101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2"} Jan 25 00:22:22 crc kubenswrapper[4947]: E0125 00:22:22.118592 4947 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2 is running failed: container process not found" containerID="101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2" 
cmd=["grpc_health_probe","-addr=:50051"] Jan 25 00:22:22 crc kubenswrapper[4947]: E0125 00:22:22.119176 4947 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2 is running failed: container process not found" containerID="101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 00:22:22 crc kubenswrapper[4947]: E0125 00:22:22.119487 4947 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2 is running failed: container process not found" containerID="101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 00:22:22 crc kubenswrapper[4947]: E0125 00:22:22.119517 4947 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-rhx28" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="registry-server" Jan 25 00:22:22 crc kubenswrapper[4947]: I0125 00:22:22.316951 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:22:22 crc kubenswrapper[4947]: I0125 00:22:22.317207 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:22:22 crc kubenswrapper[4947]: I0125 00:22:22.357663 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:22:23 crc 
kubenswrapper[4947]: I0125 00:22:23.331968 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:22:25 crc kubenswrapper[4947]: E0125 00:22:25.768218 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43" Jan 25 00:22:25 crc kubenswrapper[4947]: E0125 00:22:25.768534 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:interconnect-operator,Image:registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43,Command:[qdr-operator],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:60000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:qdr-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_QDROUTERD_IMAGE,Value:registry.redhat.io/amq7/amq-interconnect@sha256:31d87473fa684178a694f9ee331d3c80f2653f9533cb65c2a325752166a077e9,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:amq7-interconnect-operator.v1.10.20,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sx6qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly
:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod interconnect-operator-5bb49f789d-2zk49_service-telemetry(fb65215d-4c8c-4191-a224-f49ec8acfaa0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 25 00:22:25 crc kubenswrapper[4947]: E0125 00:22:25.769836 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" podUID="fb65215d-4c8c-4191-a224-f49ec8acfaa0" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.033299 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.042020 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 25 00:22:26 crc kubenswrapper[4947]: W0125 00:22:26.048631 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac6cbdf5_f2a1_4e0a_90cb_2d97e1caa9a6.slice/crio-2e7862ec85f25af6665e553c687980f94911a639a9374e04e8d2578a951dd21d WatchSource:0}: Error finding container 2e7862ec85f25af6665e553c687980f94911a639a9374e04e8d2578a951dd21d: Status 404 returned error can't find the container with id 2e7862ec85f25af6665e553c687980f94911a639a9374e04e8d2578a951dd21d Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.175279 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkh5r\" (UniqueName: \"kubernetes.io/projected/da41a595-7e83-406d-b782-de0adf6e3d8d-kube-api-access-gkh5r\") pod \"da41a595-7e83-406d-b782-de0adf6e3d8d\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.175394 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-catalog-content\") pod \"da41a595-7e83-406d-b782-de0adf6e3d8d\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.175442 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-utilities\") pod \"da41a595-7e83-406d-b782-de0adf6e3d8d\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.176476 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-utilities" (OuterVolumeSpecName: "utilities") pod "da41a595-7e83-406d-b782-de0adf6e3d8d" (UID: "da41a595-7e83-406d-b782-de0adf6e3d8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.196942 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da41a595-7e83-406d-b782-de0adf6e3d8d-kube-api-access-gkh5r" (OuterVolumeSpecName: "kube-api-access-gkh5r") pod "da41a595-7e83-406d-b782-de0adf6e3d8d" (UID: "da41a595-7e83-406d-b782-de0adf6e3d8d"). InnerVolumeSpecName "kube-api-access-gkh5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.218888 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da41a595-7e83-406d-b782-de0adf6e3d8d" (UID: "da41a595-7e83-406d-b782-de0adf6e3d8d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.276815 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.276850 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.276860 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkh5r\" (UniqueName: \"kubernetes.io/projected/da41a595-7e83-406d-b782-de0adf6e3d8d-kube-api-access-gkh5r\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.300407 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6","Type":"ContainerStarted","Data":"2e7862ec85f25af6665e553c687980f94911a639a9374e04e8d2578a951dd21d"} Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.303509 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhx28" event={"ID":"da41a595-7e83-406d-b782-de0adf6e3d8d","Type":"ContainerDied","Data":"cfa4a1c82eb366c0f3d41e29a345c0087d291286eca35ee7c803be1e30f42277"} Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.303553 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.303559 4947 scope.go:117] "RemoveContainer" containerID="101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2" Jan 25 00:22:26 crc kubenswrapper[4947]: E0125 00:22:26.305591 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43\\\"\"" pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" podUID="fb65215d-4c8c-4191-a224-f49ec8acfaa0" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.347117 4947 scope.go:117] "RemoveContainer" containerID="6cbb445a3f80a4c27484f46896be1c01f1f1240b378c14e2be68c255ab07f7b9" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.364524 4947 scope.go:117] "RemoveContainer" containerID="318d02fed846a5ed6901b65f31cfc5249873f0176dd5ac1452713156f5ee3ae6" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.374287 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhx28"] Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.377215 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rhx28"] Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.585686 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hpxfq"] Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.585973 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hpxfq" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerName="registry-server" containerID="cri-o://e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf" gracePeriod=2 Jan 25 00:22:26 crc 
kubenswrapper[4947]: I0125 00:22:26.939821 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.087951 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bmsn\" (UniqueName: \"kubernetes.io/projected/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-kube-api-access-9bmsn\") pod \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.088315 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-catalog-content\") pod \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.088471 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-utilities\") pod \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.091324 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-utilities" (OuterVolumeSpecName: "utilities") pod "c8e8b07d-e9e8-4efd-a05d-f09f78abca00" (UID: "c8e8b07d-e9e8-4efd-a05d-f09f78abca00"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.093762 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-kube-api-access-9bmsn" (OuterVolumeSpecName: "kube-api-access-9bmsn") pod "c8e8b07d-e9e8-4efd-a05d-f09f78abca00" (UID: "c8e8b07d-e9e8-4efd-a05d-f09f78abca00"). InnerVolumeSpecName "kube-api-access-9bmsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.097675 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" path="/var/lib/kubelet/pods/da41a595-7e83-406d-b782-de0adf6e3d8d/volumes" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.192361 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.192437 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bmsn\" (UniqueName: \"kubernetes.io/projected/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-kube-api-access-9bmsn\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.223282 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8e8b07d-e9e8-4efd-a05d-f09f78abca00" (UID: "c8e8b07d-e9e8-4efd-a05d-f09f78abca00"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.294238 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.312091 4947 generic.go:334] "Generic (PLEG): container finished" podID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerID="e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf" exitCode=0 Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.312195 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpxfq" event={"ID":"c8e8b07d-e9e8-4efd-a05d-f09f78abca00","Type":"ContainerDied","Data":"e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf"} Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.312228 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpxfq" event={"ID":"c8e8b07d-e9e8-4efd-a05d-f09f78abca00","Type":"ContainerDied","Data":"13b192541fa7589b2466360e6399546425f5d80b3b3a89c1761b0ae60a095da2"} Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.312244 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.312249 4947 scope.go:117] "RemoveContainer" containerID="e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.326384 4947 scope.go:117] "RemoveContainer" containerID="0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.340055 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hpxfq"] Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.344508 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hpxfq"] Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.365445 4947 scope.go:117] "RemoveContainer" containerID="9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.381896 4947 scope.go:117] "RemoveContainer" containerID="e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf" Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.382541 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf\": container with ID starting with e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf not found: ID does not exist" containerID="e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.382585 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf"} err="failed to get container status \"e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf\": rpc error: code = NotFound desc = could not find container 
\"e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf\": container with ID starting with e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf not found: ID does not exist" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.382614 4947 scope.go:117] "RemoveContainer" containerID="0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0" Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.382928 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0\": container with ID starting with 0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0 not found: ID does not exist" containerID="0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.382967 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0"} err="failed to get container status \"0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0\": rpc error: code = NotFound desc = could not find container \"0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0\": container with ID starting with 0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0 not found: ID does not exist" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.382994 4947 scope.go:117] "RemoveContainer" containerID="9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8" Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.383387 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8\": container with ID starting with 9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8 not found: ID does not exist" 
containerID="9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.383420 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8"} err="failed to get container status \"9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8\": rpc error: code = NotFound desc = could not find container \"9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8\": container with ID starting with 9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8 not found: ID does not exist" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.655807 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc"] Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.656014 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerName="registry-server" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656025 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerName="registry-server" Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.656034 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerName="extract-utilities" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656040 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerName="extract-utilities" Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.656051 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="registry-server" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656059 4947 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="registry-server" Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.656068 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="extract-utilities" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656074 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="extract-utilities" Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.656086 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerName="extract-content" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656091 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerName="extract-content" Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.656101 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="extract-content" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656107 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="extract-content" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656205 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerName="registry-server" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656224 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="registry-server" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656577 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.659915 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.660569 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.660780 4947 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-77nl2" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.671456 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc"] Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.799733 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqt4x\" (UniqueName: \"kubernetes.io/projected/4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed-kube-api-access-lqt4x\") pod \"cert-manager-operator-controller-manager-5446d6888b-dlmlc\" (UID: \"4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.799830 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-dlmlc\" (UID: \"4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.900860 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-dlmlc\" (UID: \"4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.900932 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqt4x\" (UniqueName: \"kubernetes.io/projected/4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed-kube-api-access-lqt4x\") pod \"cert-manager-operator-controller-manager-5446d6888b-dlmlc\" (UID: \"4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.901406 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-dlmlc\" (UID: \"4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.920550 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqt4x\" (UniqueName: \"kubernetes.io/projected/4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed-kube-api-access-lqt4x\") pod \"cert-manager-operator-controller-manager-5446d6888b-dlmlc\" (UID: \"4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" Jan 25 00:22:28 crc kubenswrapper[4947]: I0125 00:22:28.017286 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" Jan 25 00:22:29 crc kubenswrapper[4947]: I0125 00:22:29.140075 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" path="/var/lib/kubelet/pods/c8e8b07d-e9e8-4efd-a05d-f09f78abca00/volumes" Jan 25 00:22:30 crc kubenswrapper[4947]: I0125 00:22:30.139564 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc"] Jan 25 00:22:30 crc kubenswrapper[4947]: W0125 00:22:30.154244 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fe8724a_3a6a_4130_9d5b_e2e3be8d30ed.slice/crio-11f3e96ba64717225b6f67defcec6058ee830d7655b180ecc8af90a8330b9769 WatchSource:0}: Error finding container 11f3e96ba64717225b6f67defcec6058ee830d7655b180ecc8af90a8330b9769: Status 404 returned error can't find the container with id 11f3e96ba64717225b6f67defcec6058ee830d7655b180ecc8af90a8330b9769 Jan 25 00:22:30 crc kubenswrapper[4947]: I0125 00:22:30.350425 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" event={"ID":"4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed","Type":"ContainerStarted","Data":"11f3e96ba64717225b6f67defcec6058ee830d7655b180ecc8af90a8330b9769"} Jan 25 00:22:38 crc kubenswrapper[4947]: I0125 00:22:38.414702 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" event={"ID":"4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed","Type":"ContainerStarted","Data":"573b6d1c112bd77306e078ee314e3fb6e8d6b6016410c9a9201a722d482f7ab0"} Jan 25 00:22:38 crc kubenswrapper[4947]: I0125 00:22:38.431394 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" podStartSLOduration=3.353529467 podStartE2EDuration="11.431372858s" podCreationTimestamp="2026-01-25 00:22:27 +0000 UTC" firstStartedPulling="2026-01-25 00:22:30.157500836 +0000 UTC m=+789.390491266" lastFinishedPulling="2026-01-25 00:22:38.235344207 +0000 UTC m=+797.468334657" observedRunningTime="2026-01-25 00:22:38.429382991 +0000 UTC m=+797.662373431" watchObservedRunningTime="2026-01-25 00:22:38.431372858 +0000 UTC m=+797.664363298" Jan 25 00:22:39 crc kubenswrapper[4947]: I0125 00:22:39.423678 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" event={"ID":"fb65215d-4c8c-4191-a224-f49ec8acfaa0","Type":"ContainerStarted","Data":"3932ce9a7a1ad50fc9ed55a9e3e651a626810a513407886aae25c87f11a62988"} Jan 25 00:22:39 crc kubenswrapper[4947]: I0125 00:22:39.426571 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6","Type":"ContainerStarted","Data":"389df8f63a5429aa3e6c47d509692bd8ead951a42c2a2550fc4278810ebd4d4e"} Jan 25 00:22:39 crc kubenswrapper[4947]: I0125 00:22:39.444177 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" podStartSLOduration=4.999585345 podStartE2EDuration="34.444162529s" podCreationTimestamp="2026-01-25 00:22:05 +0000 UTC" firstStartedPulling="2026-01-25 00:22:08.918830024 +0000 UTC m=+768.151820464" lastFinishedPulling="2026-01-25 00:22:38.363407208 +0000 UTC m=+797.596397648" observedRunningTime="2026-01-25 00:22:39.441973997 +0000 UTC m=+798.674964467" watchObservedRunningTime="2026-01-25 00:22:39.444162529 +0000 UTC m=+798.677152969" Jan 25 00:22:39 crc kubenswrapper[4947]: I0125 00:22:39.613117 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 25 
00:22:39 crc kubenswrapper[4947]: I0125 00:22:39.631997 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 25 00:22:41 crc kubenswrapper[4947]: I0125 00:22:41.437073 4947 generic.go:334] "Generic (PLEG): container finished" podID="ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6" containerID="389df8f63a5429aa3e6c47d509692bd8ead951a42c2a2550fc4278810ebd4d4e" exitCode=0 Jan 25 00:22:41 crc kubenswrapper[4947]: I0125 00:22:41.437159 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6","Type":"ContainerDied","Data":"389df8f63a5429aa3e6c47d509692bd8ead951a42c2a2550fc4278810ebd4d4e"} Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.019913 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-wqxxr"] Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.020862 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.025280 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.025526 4947 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-468lq" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.026946 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.033222 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpbcm\" (UniqueName: \"kubernetes.io/projected/d860ec8b-2f41-4b81-8868-9b078b55b341-kube-api-access-qpbcm\") pod \"cert-manager-webhook-f4fb5df64-wqxxr\" (UID: \"d860ec8b-2f41-4b81-8868-9b078b55b341\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.033312 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d860ec8b-2f41-4b81-8868-9b078b55b341-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-wqxxr\" (UID: \"d860ec8b-2f41-4b81-8868-9b078b55b341\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.034852 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-wqxxr"] Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.134091 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpbcm\" (UniqueName: \"kubernetes.io/projected/d860ec8b-2f41-4b81-8868-9b078b55b341-kube-api-access-qpbcm\") pod \"cert-manager-webhook-f4fb5df64-wqxxr\" (UID: 
\"d860ec8b-2f41-4b81-8868-9b078b55b341\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.134178 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d860ec8b-2f41-4b81-8868-9b078b55b341-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-wqxxr\" (UID: \"d860ec8b-2f41-4b81-8868-9b078b55b341\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.152720 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d860ec8b-2f41-4b81-8868-9b078b55b341-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-wqxxr\" (UID: \"d860ec8b-2f41-4b81-8868-9b078b55b341\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.153192 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpbcm\" (UniqueName: \"kubernetes.io/projected/d860ec8b-2f41-4b81-8868-9b078b55b341-kube-api-access-qpbcm\") pod \"cert-manager-webhook-f4fb5df64-wqxxr\" (UID: \"d860ec8b-2f41-4b81-8868-9b078b55b341\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.336695 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.488948 4947 generic.go:334] "Generic (PLEG): container finished" podID="ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6" containerID="7934a5f9bb6f819c97b501ceddb942047b6631a4004801a28c266ec0d76fc45f" exitCode=0 Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.488996 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6","Type":"ContainerDied","Data":"7934a5f9bb6f819c97b501ceddb942047b6631a4004801a28c266ec0d76fc45f"} Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.620297 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-wqxxr"] Jan 25 00:22:43 crc kubenswrapper[4947]: I0125 00:22:43.495050 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" event={"ID":"d860ec8b-2f41-4b81-8868-9b078b55b341","Type":"ContainerStarted","Data":"f9b6b38877a9fb096906b55e70ad4a760f4b259844414f83e1764d189833a252"} Jan 25 00:22:43 crc kubenswrapper[4947]: I0125 00:22:43.499487 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6","Type":"ContainerStarted","Data":"ed65abcb107b3b17187ba899baf6afa900a6101df6f3bed2237909312155847d"} Jan 25 00:22:43 crc kubenswrapper[4947]: I0125 00:22:43.499649 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:43 crc kubenswrapper[4947]: I0125 00:22:43.548804 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=18.138792732 podStartE2EDuration="30.548776356s" podCreationTimestamp="2026-01-25 00:22:13 +0000 UTC" 
firstStartedPulling="2026-01-25 00:22:26.050647696 +0000 UTC m=+785.283638136" lastFinishedPulling="2026-01-25 00:22:38.46063131 +0000 UTC m=+797.693621760" observedRunningTime="2026-01-25 00:22:43.547747081 +0000 UTC m=+802.780737531" watchObservedRunningTime="2026-01-25 00:22:43.548776356 +0000 UTC m=+802.781766786" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.160696 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz"] Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.161796 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.164645 4947 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2g48r" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.180089 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz"] Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.249725 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c215f860-08a3-4dbd-b7f2-426286319aa8-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-cqxvz\" (UID: \"c215f860-08a3-4dbd-b7f2-426286319aa8\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.249914 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9jjb\" (UniqueName: \"kubernetes.io/projected/c215f860-08a3-4dbd-b7f2-426286319aa8-kube-api-access-z9jjb\") pod \"cert-manager-cainjector-855d9ccff4-cqxvz\" (UID: \"c215f860-08a3-4dbd-b7f2-426286319aa8\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 
00:22:45.351100 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9jjb\" (UniqueName: \"kubernetes.io/projected/c215f860-08a3-4dbd-b7f2-426286319aa8-kube-api-access-z9jjb\") pod \"cert-manager-cainjector-855d9ccff4-cqxvz\" (UID: \"c215f860-08a3-4dbd-b7f2-426286319aa8\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.351180 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c215f860-08a3-4dbd-b7f2-426286319aa8-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-cqxvz\" (UID: \"c215f860-08a3-4dbd-b7f2-426286319aa8\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.374392 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c215f860-08a3-4dbd-b7f2-426286319aa8-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-cqxvz\" (UID: \"c215f860-08a3-4dbd-b7f2-426286319aa8\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.375438 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9jjb\" (UniqueName: \"kubernetes.io/projected/c215f860-08a3-4dbd-b7f2-426286319aa8-kube-api-access-z9jjb\") pod \"cert-manager-cainjector-855d9ccff4-cqxvz\" (UID: \"c215f860-08a3-4dbd-b7f2-426286319aa8\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.482803 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" Jan 25 00:22:46 crc kubenswrapper[4947]: I0125 00:22:46.189238 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz"] Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.486169 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.489713 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.491535 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.491730 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.491977 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.493344 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.512982 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606229 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606272 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606318 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606346 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-push\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606370 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606428 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" 
(UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606458 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606484 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606502 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606618 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606748 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bgfk\" (UniqueName: \"kubernetes.io/projected/661d7c06-4a71-4c19-8fa1-bdca787b20c1-kube-api-access-7bgfk\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606823 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709017 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709116 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709172 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: 
\"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-push\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709205 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709228 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709246 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709339 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709439 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709465 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709519 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709563 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709618 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bgfk\" (UniqueName: \"kubernetes.io/projected/661d7c06-4a71-4c19-8fa1-bdca787b20c1-kube-api-access-7bgfk\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 
00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709706 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709866 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709885 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.710035 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.710326 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.710376 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.710422 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.710689 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.710802 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.723140 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-push\") pod 
\"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.730548 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bgfk\" (UniqueName: \"kubernetes.io/projected/661d7c06-4a71-4c19-8fa1-bdca787b20c1-kube-api-access-7bgfk\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.737121 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.814918 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:52 crc kubenswrapper[4947]: I0125 00:22:52.581409 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 25 00:22:52 crc kubenswrapper[4947]: I0125 00:22:52.587799 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"661d7c06-4a71-4c19-8fa1-bdca787b20c1","Type":"ContainerStarted","Data":"87586fe90c2425fe7d5e7682b0325feea1e33f7914939e906cffe4b6a4e486d3"} Jan 25 00:22:52 crc kubenswrapper[4947]: I0125 00:22:52.591819 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" event={"ID":"d860ec8b-2f41-4b81-8868-9b078b55b341","Type":"ContainerStarted","Data":"c834bdca81785468ff5ec19b7492380d39d838d2f05f6330f36e8e7c57d3960a"} Jan 25 00:22:52 crc kubenswrapper[4947]: I0125 00:22:52.592458 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:52 crc kubenswrapper[4947]: I0125 00:22:52.593467 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" event={"ID":"c215f860-08a3-4dbd-b7f2-426286319aa8","Type":"ContainerStarted","Data":"5e580cfa8b48f9f1f211860e22b8fc368c1b91dbfe0cf5a31c8623b8758686ee"} Jan 25 00:22:52 crc kubenswrapper[4947]: I0125 00:22:52.607617 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" podStartSLOduration=0.852274381 podStartE2EDuration="10.607595296s" podCreationTimestamp="2026-01-25 00:22:42 +0000 UTC" firstStartedPulling="2026-01-25 00:22:42.629717323 +0000 UTC m=+801.862707763" lastFinishedPulling="2026-01-25 00:22:52.385038238 +0000 UTC m=+811.618028678" observedRunningTime="2026-01-25 00:22:52.605225289 +0000 UTC m=+811.838215739" 
watchObservedRunningTime="2026-01-25 00:22:52.607595296 +0000 UTC m=+811.840585736" Jan 25 00:22:53 crc kubenswrapper[4947]: I0125 00:22:53.600648 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" event={"ID":"c215f860-08a3-4dbd-b7f2-426286319aa8","Type":"ContainerStarted","Data":"8240170dd5cb499d26dc4e187db57d057e181984ec0210b4b6553f70f73aaa2b"} Jan 25 00:22:53 crc kubenswrapper[4947]: I0125 00:22:53.945723 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6" containerName="elasticsearch" probeResult="failure" output=< Jan 25 00:22:53 crc kubenswrapper[4947]: {"timestamp": "2026-01-25T00:22:53+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 25 00:22:53 crc kubenswrapper[4947]: > Jan 25 00:22:57 crc kubenswrapper[4947]: I0125 00:22:57.343393 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:57 crc kubenswrapper[4947]: I0125 00:22:57.367159 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" podStartSLOduration=11.672482 podStartE2EDuration="12.36711685s" podCreationTimestamp="2026-01-25 00:22:45 +0000 UTC" firstStartedPulling="2026-01-25 00:22:52.26458871 +0000 UTC m=+811.497579150" lastFinishedPulling="2026-01-25 00:22:52.95922353 +0000 UTC m=+812.192214000" observedRunningTime="2026-01-25 00:22:53.627214401 +0000 UTC m=+812.860204851" watchObservedRunningTime="2026-01-25 00:22:57.36711685 +0000 UTC m=+816.600107290" Jan 25 00:22:59 crc kubenswrapper[4947]: I0125 00:22:59.278322 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:23:00 crc kubenswrapper[4947]: I0125 00:23:00.651033 4947 generic.go:334] "Generic (PLEG): container 
finished" podID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" containerID="29e25cd653e5646a037201713219653ddf735f00af3b32cacab158f9791bc90a" exitCode=0 Jan 25 00:23:00 crc kubenswrapper[4947]: I0125 00:23:00.651160 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"661d7c06-4a71-4c19-8fa1-bdca787b20c1","Type":"ContainerDied","Data":"29e25cd653e5646a037201713219653ddf735f00af3b32cacab158f9791bc90a"} Jan 25 00:23:00 crc kubenswrapper[4947]: I0125 00:23:00.953011 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 25 00:23:00 crc kubenswrapper[4947]: I0125 00:23:00.976936 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-tgcft"] Jan 25 00:23:00 crc kubenswrapper[4947]: I0125 00:23:00.978321 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-tgcft" Jan 25 00:23:00 crc kubenswrapper[4947]: I0125 00:23:00.980874 4947 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-g9rj2" Jan 25 00:23:00 crc kubenswrapper[4947]: I0125 00:23:00.988315 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-tgcft"] Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.075750 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6k72\" (UniqueName: \"kubernetes.io/projected/6beb1442-5e99-4164-8077-50d6eb5dbd44-kube-api-access-q6k72\") pod \"cert-manager-86cb77c54b-tgcft\" (UID: \"6beb1442-5e99-4164-8077-50d6eb5dbd44\") " pod="cert-manager/cert-manager-86cb77c54b-tgcft" Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.075877 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/6beb1442-5e99-4164-8077-50d6eb5dbd44-bound-sa-token\") pod \"cert-manager-86cb77c54b-tgcft\" (UID: \"6beb1442-5e99-4164-8077-50d6eb5dbd44\") " pod="cert-manager/cert-manager-86cb77c54b-tgcft" Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.177524 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6k72\" (UniqueName: \"kubernetes.io/projected/6beb1442-5e99-4164-8077-50d6eb5dbd44-kube-api-access-q6k72\") pod \"cert-manager-86cb77c54b-tgcft\" (UID: \"6beb1442-5e99-4164-8077-50d6eb5dbd44\") " pod="cert-manager/cert-manager-86cb77c54b-tgcft" Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.179487 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6beb1442-5e99-4164-8077-50d6eb5dbd44-bound-sa-token\") pod \"cert-manager-86cb77c54b-tgcft\" (UID: \"6beb1442-5e99-4164-8077-50d6eb5dbd44\") " pod="cert-manager/cert-manager-86cb77c54b-tgcft" Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.201963 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6k72\" (UniqueName: \"kubernetes.io/projected/6beb1442-5e99-4164-8077-50d6eb5dbd44-kube-api-access-q6k72\") pod \"cert-manager-86cb77c54b-tgcft\" (UID: \"6beb1442-5e99-4164-8077-50d6eb5dbd44\") " pod="cert-manager/cert-manager-86cb77c54b-tgcft" Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.202089 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6beb1442-5e99-4164-8077-50d6eb5dbd44-bound-sa-token\") pod \"cert-manager-86cb77c54b-tgcft\" (UID: \"6beb1442-5e99-4164-8077-50d6eb5dbd44\") " pod="cert-manager/cert-manager-86cb77c54b-tgcft" Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.309879 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-tgcft" Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.664994 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"661d7c06-4a71-4c19-8fa1-bdca787b20c1","Type":"ContainerStarted","Data":"42d0401113a9cc459f1654c1224909ba635689103f9bf3566db1b01252300549"} Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.665281 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" containerName="docker-build" containerID="cri-o://42d0401113a9cc459f1654c1224909ba635689103f9bf3566db1b01252300549" gracePeriod=30 Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.705676 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-1-build" podStartSLOduration=4.387894255 podStartE2EDuration="11.705655747s" podCreationTimestamp="2026-01-25 00:22:50 +0000 UTC" firstStartedPulling="2026-01-25 00:22:52.582987976 +0000 UTC m=+811.815978416" lastFinishedPulling="2026-01-25 00:22:59.900749468 +0000 UTC m=+819.133739908" observedRunningTime="2026-01-25 00:23:01.70200722 +0000 UTC m=+820.934997660" watchObservedRunningTime="2026-01-25 00:23:01.705655747 +0000 UTC m=+820.938646197" Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.829991 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-tgcft"] Jan 25 00:23:01 crc kubenswrapper[4947]: W0125 00:23:01.841147 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6beb1442_5e99_4164_8077_50d6eb5dbd44.slice/crio-207b9082d9fd6b34ea2385d5a82c34ec6a09602b0c9b39bde3ad7bff4325ee00 WatchSource:0}: Error finding container 207b9082d9fd6b34ea2385d5a82c34ec6a09602b0c9b39bde3ad7bff4325ee00: 
Status 404 returned error can't find the container with id 207b9082d9fd6b34ea2385d5a82c34ec6a09602b0c9b39bde3ad7bff4325ee00 Jan 25 00:23:02 crc kubenswrapper[4947]: I0125 00:23:02.671978 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-tgcft" event={"ID":"6beb1442-5e99-4164-8077-50d6eb5dbd44","Type":"ContainerStarted","Data":"207b9082d9fd6b34ea2385d5a82c34ec6a09602b0c9b39bde3ad7bff4325ee00"} Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.125066 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.126519 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.129583 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.129626 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.130063 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.167601 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.209737 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.209812 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.209922 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.209984 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-push\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.210008 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.210035 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.210066 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.210086 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.210147 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.210221 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.210257 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.210304 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc8cj\" (UniqueName: \"kubernetes.io/projected/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-kube-api-access-jc8cj\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312152 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-push\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312233 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312288 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312328 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312382 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312410 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312473 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312500 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312570 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc8cj\" (UniqueName: \"kubernetes.io/projected/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-kube-api-access-jc8cj\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312632 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312701 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312739 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312828 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.313539 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.313730 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.313743 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.313799 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: 
\"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.314023 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.314086 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.314668 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.314874 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.319949 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: 
\"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.321251 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-push\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.338053 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc8cj\" (UniqueName: \"kubernetes.io/projected/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-kube-api-access-jc8cj\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.450266 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.959149 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 25 00:23:04 crc kubenswrapper[4947]: I0125 00:23:04.687930 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e","Type":"ContainerStarted","Data":"57b1451538b238472d5680ea17ac3804ec620b15075d2890a204aa83384bdd41"} Jan 25 00:23:09 crc kubenswrapper[4947]: I0125 00:23:09.430739 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_661d7c06-4a71-4c19-8fa1-bdca787b20c1/docker-build/0.log" Jan 25 00:23:09 crc kubenswrapper[4947]: I0125 00:23:09.431852 4947 generic.go:334] "Generic (PLEG): container finished" podID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" containerID="42d0401113a9cc459f1654c1224909ba635689103f9bf3566db1b01252300549" exitCode=-1 Jan 25 00:23:09 crc kubenswrapper[4947]: I0125 00:23:09.431893 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"661d7c06-4a71-4c19-8fa1-bdca787b20c1","Type":"ContainerDied","Data":"42d0401113a9cc459f1654c1224909ba635689103f9bf3566db1b01252300549"} Jan 25 00:23:12 crc kubenswrapper[4947]: I0125 00:23:12.455496 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e","Type":"ContainerStarted","Data":"21824eaf8102a5c1782785b66dfb26808429386a84873ff0bff3535456020c43"} Jan 25 00:23:12 crc kubenswrapper[4947]: I0125 00:23:12.462791 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-tgcft" 
event={"ID":"6beb1442-5e99-4164-8077-50d6eb5dbd44","Type":"ContainerStarted","Data":"ac5038159970cbc4930dbdf37643e6b49b70d5c771b838917117990074ca16be"} Jan 25 00:23:12 crc kubenswrapper[4947]: I0125 00:23:12.510309 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-tgcft" podStartSLOduration=12.510278519 podStartE2EDuration="12.510278519s" podCreationTimestamp="2026-01-25 00:23:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:23:12.482150684 +0000 UTC m=+831.715141134" watchObservedRunningTime="2026-01-25 00:23:12.510278519 +0000 UTC m=+831.743268959" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.584834 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_661d7c06-4a71-4c19-8fa1-bdca787b20c1/docker-build/0.log" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.586729 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"661d7c06-4a71-4c19-8fa1-bdca787b20c1","Type":"ContainerDied","Data":"87586fe90c2425fe7d5e7682b0325feea1e33f7914939e906cffe4b6a4e486d3"} Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.586805 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87586fe90c2425fe7d5e7682b0325feea1e33f7914939e906cffe4b6a4e486d3" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.644178 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_661d7c06-4a71-4c19-8fa1-bdca787b20c1/docker-build/0.log" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.645279 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802068 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bgfk\" (UniqueName: \"kubernetes.io/projected/661d7c06-4a71-4c19-8fa1-bdca787b20c1-kube-api-access-7bgfk\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802183 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-system-configs\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802208 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-proxy-ca-bundles\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802233 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildcachedir\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802272 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-blob-cache\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802297 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-push\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802343 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-root\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802394 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildworkdir\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802444 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-ca-bundles\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802465 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-pull\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802502 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-node-pullsecrets\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802558 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-run\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.803424 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.803516 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.803696 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.803953 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.804078 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.804638 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.804946 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.808344 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.808927 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.810783 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.814933 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.817076 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/661d7c06-4a71-4c19-8fa1-bdca787b20c1-kube-api-access-7bgfk" (OuterVolumeSpecName: "kube-api-access-7bgfk") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "kube-api-access-7bgfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904715 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904760 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904775 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904787 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904798 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bgfk\" (UniqueName: \"kubernetes.io/projected/661d7c06-4a71-4c19-8fa1-bdca787b20c1-kube-api-access-7bgfk\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904809 4947 
reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904820 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904833 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904844 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904857 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904868 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904881 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:15 crc kubenswrapper[4947]: I0125 00:23:15.589989 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:23:15 crc kubenswrapper[4947]: I0125 00:23:15.627458 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 25 00:23:15 crc kubenswrapper[4947]: I0125 00:23:15.643094 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 25 00:23:17 crc kubenswrapper[4947]: I0125 00:23:17.117857 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" path="/var/lib/kubelet/pods/661d7c06-4a71-4c19-8fa1-bdca787b20c1/volumes" Jan 25 00:23:25 crc kubenswrapper[4947]: I0125 00:23:25.656744 4947 generic.go:334] "Generic (PLEG): container finished" podID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerID="21824eaf8102a5c1782785b66dfb26808429386a84873ff0bff3535456020c43" exitCode=0 Jan 25 00:23:25 crc kubenswrapper[4947]: I0125 00:23:25.656841 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e","Type":"ContainerDied","Data":"21824eaf8102a5c1782785b66dfb26808429386a84873ff0bff3535456020c43"} Jan 25 00:23:26 crc kubenswrapper[4947]: I0125 00:23:26.664375 4947 generic.go:334] "Generic (PLEG): container finished" podID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerID="a59fb62c9dc1e7977c32c6df788cb5db1cec709bb8d7251a71d0d7328202f9cd" exitCode=0 Jan 25 00:23:26 crc kubenswrapper[4947]: I0125 00:23:26.664616 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e","Type":"ContainerDied","Data":"a59fb62c9dc1e7977c32c6df788cb5db1cec709bb8d7251a71d0d7328202f9cd"} Jan 25 00:23:26 crc kubenswrapper[4947]: I0125 00:23:26.722870 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_f499c9d2-38e0-4cb5-a5d2-1b0142726c5e/manage-dockerfile/0.log" Jan 25 00:23:27 crc kubenswrapper[4947]: I0125 00:23:27.676372 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e","Type":"ContainerStarted","Data":"2d797b0b6199276b6cd59577ed9b25e9afa5b49c9ace218d218e4e1f4e98c455"} Jan 25 00:23:27 crc kubenswrapper[4947]: I0125 00:23:27.716949 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=24.716931469 podStartE2EDuration="24.716931469s" podCreationTimestamp="2026-01-25 00:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:23:27.7148561 +0000 UTC m=+846.947846560" watchObservedRunningTime="2026-01-25 00:23:27.716931469 +0000 UTC m=+846.949921919" Jan 25 00:23:47 crc kubenswrapper[4947]: I0125 00:23:47.073080 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:23:47 crc kubenswrapper[4947]: I0125 00:23:47.073863 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.786875 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4mr96"] Jan 25 00:24:00 crc 
kubenswrapper[4947]: E0125 00:24:00.787937 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" containerName="manage-dockerfile" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.787959 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" containerName="manage-dockerfile" Jan 25 00:24:00 crc kubenswrapper[4947]: E0125 00:24:00.787984 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" containerName="docker-build" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.787995 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" containerName="docker-build" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.788205 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" containerName="docker-build" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.789614 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.795968 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mr96"] Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.845736 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-catalog-content\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.845814 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwvk9\" (UniqueName: \"kubernetes.io/projected/d01d9565-1bc1-4895-ab51-3c469f07d4c6-kube-api-access-wwvk9\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.845955 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-utilities\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.947338 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwvk9\" (UniqueName: \"kubernetes.io/projected/d01d9565-1bc1-4895-ab51-3c469f07d4c6-kube-api-access-wwvk9\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.947909 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-utilities\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.948480 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-utilities\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.948660 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-catalog-content\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.949018 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-catalog-content\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.976252 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwvk9\" (UniqueName: \"kubernetes.io/projected/d01d9565-1bc1-4895-ab51-3c469f07d4c6-kube-api-access-wwvk9\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:01 crc kubenswrapper[4947]: I0125 00:24:01.114414 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:01 crc kubenswrapper[4947]: I0125 00:24:01.391304 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mr96"] Jan 25 00:24:02 crc kubenswrapper[4947]: I0125 00:24:02.127411 4947 generic.go:334] "Generic (PLEG): container finished" podID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerID="1ffe76381fa0fd773a56c6c8064db7fcd61ac5c99771a2a85062c8791b58dc75" exitCode=0 Jan 25 00:24:02 crc kubenswrapper[4947]: I0125 00:24:02.127676 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mr96" event={"ID":"d01d9565-1bc1-4895-ab51-3c469f07d4c6","Type":"ContainerDied","Data":"1ffe76381fa0fd773a56c6c8064db7fcd61ac5c99771a2a85062c8791b58dc75"} Jan 25 00:24:02 crc kubenswrapper[4947]: I0125 00:24:02.127855 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mr96" event={"ID":"d01d9565-1bc1-4895-ab51-3c469f07d4c6","Type":"ContainerStarted","Data":"976eff545a016fc3cc718daaa9571b46942fe16057e80a3eebd96df87ec75e65"} Jan 25 00:24:03 crc kubenswrapper[4947]: I0125 00:24:03.135477 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mr96" event={"ID":"d01d9565-1bc1-4895-ab51-3c469f07d4c6","Type":"ContainerStarted","Data":"6e1eaa0a1619dff5720dacc9a54e6f3666f22f1e999581dd65ae0281a50e0fd2"} Jan 25 00:24:04 crc kubenswrapper[4947]: I0125 00:24:04.144845 4947 generic.go:334] "Generic (PLEG): container finished" podID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerID="6e1eaa0a1619dff5720dacc9a54e6f3666f22f1e999581dd65ae0281a50e0fd2" exitCode=0 Jan 25 00:24:04 crc kubenswrapper[4947]: I0125 00:24:04.144908 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mr96" 
event={"ID":"d01d9565-1bc1-4895-ab51-3c469f07d4c6","Type":"ContainerDied","Data":"6e1eaa0a1619dff5720dacc9a54e6f3666f22f1e999581dd65ae0281a50e0fd2"} Jan 25 00:24:05 crc kubenswrapper[4947]: I0125 00:24:05.153028 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mr96" event={"ID":"d01d9565-1bc1-4895-ab51-3c469f07d4c6","Type":"ContainerStarted","Data":"aa1e3f454f3e0f662f0c473f2ae2d780dea704c1c430dd5489c79186d8db5afd"} Jan 25 00:24:05 crc kubenswrapper[4947]: I0125 00:24:05.175594 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4mr96" podStartSLOduration=2.723664477 podStartE2EDuration="5.175576474s" podCreationTimestamp="2026-01-25 00:24:00 +0000 UTC" firstStartedPulling="2026-01-25 00:24:02.130557151 +0000 UTC m=+881.363547631" lastFinishedPulling="2026-01-25 00:24:04.582469148 +0000 UTC m=+883.815459628" observedRunningTime="2026-01-25 00:24:05.172188953 +0000 UTC m=+884.405179413" watchObservedRunningTime="2026-01-25 00:24:05.175576474 +0000 UTC m=+884.408566914" Jan 25 00:24:11 crc kubenswrapper[4947]: I0125 00:24:11.115819 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:11 crc kubenswrapper[4947]: I0125 00:24:11.116315 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:11 crc kubenswrapper[4947]: I0125 00:24:11.176862 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:11 crc kubenswrapper[4947]: I0125 00:24:11.275472 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:11 crc kubenswrapper[4947]: I0125 00:24:11.427420 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-4mr96"] Jan 25 00:24:13 crc kubenswrapper[4947]: I0125 00:24:13.217428 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4mr96" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerName="registry-server" containerID="cri-o://aa1e3f454f3e0f662f0c473f2ae2d780dea704c1c430dd5489c79186d8db5afd" gracePeriod=2 Jan 25 00:24:13 crc kubenswrapper[4947]: E0125 00:24:13.316275 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd01d9565_1bc1_4895_ab51_3c469f07d4c6.slice/crio-aa1e3f454f3e0f662f0c473f2ae2d780dea704c1c430dd5489c79186d8db5afd.scope\": RecentStats: unable to find data in memory cache]" Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.072334 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.072702 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.252712 4947 generic.go:334] "Generic (PLEG): container finished" podID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerID="aa1e3f454f3e0f662f0c473f2ae2d780dea704c1c430dd5489c79186d8db5afd" exitCode=0 Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.252927 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-4mr96" event={"ID":"d01d9565-1bc1-4895-ab51-3c469f07d4c6","Type":"ContainerDied","Data":"aa1e3f454f3e0f662f0c473f2ae2d780dea704c1c430dd5489c79186d8db5afd"} Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.319117 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.379598 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwvk9\" (UniqueName: \"kubernetes.io/projected/d01d9565-1bc1-4895-ab51-3c469f07d4c6-kube-api-access-wwvk9\") pod \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.379641 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-utilities\") pod \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.379726 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-catalog-content\") pod \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.380683 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-utilities" (OuterVolumeSpecName: "utilities") pod "d01d9565-1bc1-4895-ab51-3c469f07d4c6" (UID: "d01d9565-1bc1-4895-ab51-3c469f07d4c6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.385895 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d01d9565-1bc1-4895-ab51-3c469f07d4c6-kube-api-access-wwvk9" (OuterVolumeSpecName: "kube-api-access-wwvk9") pod "d01d9565-1bc1-4895-ab51-3c469f07d4c6" (UID: "d01d9565-1bc1-4895-ab51-3c469f07d4c6"). InnerVolumeSpecName "kube-api-access-wwvk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.424762 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d01d9565-1bc1-4895-ab51-3c469f07d4c6" (UID: "d01d9565-1bc1-4895-ab51-3c469f07d4c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.481244 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwvk9\" (UniqueName: \"kubernetes.io/projected/d01d9565-1bc1-4895-ab51-3c469f07d4c6-kube-api-access-wwvk9\") on node \"crc\" DevicePath \"\"" Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.481293 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.481321 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:24:18 crc kubenswrapper[4947]: I0125 00:24:18.266313 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mr96" 
event={"ID":"d01d9565-1bc1-4895-ab51-3c469f07d4c6","Type":"ContainerDied","Data":"976eff545a016fc3cc718daaa9571b46942fe16057e80a3eebd96df87ec75e65"} Jan 25 00:24:18 crc kubenswrapper[4947]: I0125 00:24:18.266383 4947 scope.go:117] "RemoveContainer" containerID="aa1e3f454f3e0f662f0c473f2ae2d780dea704c1c430dd5489c79186d8db5afd" Jan 25 00:24:18 crc kubenswrapper[4947]: I0125 00:24:18.266474 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:18 crc kubenswrapper[4947]: I0125 00:24:18.290820 4947 scope.go:117] "RemoveContainer" containerID="6e1eaa0a1619dff5720dacc9a54e6f3666f22f1e999581dd65ae0281a50e0fd2" Jan 25 00:24:18 crc kubenswrapper[4947]: I0125 00:24:18.305817 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4mr96"] Jan 25 00:24:18 crc kubenswrapper[4947]: I0125 00:24:18.311805 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4mr96"] Jan 25 00:24:18 crc kubenswrapper[4947]: I0125 00:24:18.315794 4947 scope.go:117] "RemoveContainer" containerID="1ffe76381fa0fd773a56c6c8064db7fcd61ac5c99771a2a85062c8791b58dc75" Jan 25 00:24:19 crc kubenswrapper[4947]: I0125 00:24:19.101967 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" path="/var/lib/kubelet/pods/d01d9565-1bc1-4895-ab51-3c469f07d4c6/volumes" Jan 25 00:24:47 crc kubenswrapper[4947]: I0125 00:24:47.072623 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:24:47 crc kubenswrapper[4947]: I0125 00:24:47.073266 4947 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:24:47 crc kubenswrapper[4947]: I0125 00:24:47.073322 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:24:47 crc kubenswrapper[4947]: I0125 00:24:47.073932 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d54be0f7cb7c92bd5d7a293f8c1b17de01e73a4d2768de922b6d4f49ead1a39"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 00:24:47 crc kubenswrapper[4947]: I0125 00:24:47.073999 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" containerID="cri-o://3d54be0f7cb7c92bd5d7a293f8c1b17de01e73a4d2768de922b6d4f49ead1a39" gracePeriod=600 Jan 25 00:24:48 crc kubenswrapper[4947]: I0125 00:24:48.494476 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"3d54be0f7cb7c92bd5d7a293f8c1b17de01e73a4d2768de922b6d4f49ead1a39"} Jan 25 00:24:48 crc kubenswrapper[4947]: I0125 00:24:48.495256 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="3d54be0f7cb7c92bd5d7a293f8c1b17de01e73a4d2768de922b6d4f49ead1a39" exitCode=0 Jan 25 00:24:48 crc kubenswrapper[4947]: I0125 00:24:48.495330 4947 scope.go:117] "RemoveContainer" 
containerID="f64176709d5607e82072c383022bad7109ba8a5dcfa7f1ccb95f13edf0e8c935" Jan 25 00:24:48 crc kubenswrapper[4947]: I0125 00:24:48.495367 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"16e9b3158d1aebf618ce87b5aa5158940b299b1ba44d29893c13b023abe239b1"} Jan 25 00:25:14 crc kubenswrapper[4947]: I0125 00:25:14.755652 4947 generic.go:334] "Generic (PLEG): container finished" podID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerID="2d797b0b6199276b6cd59577ed9b25e9afa5b49c9ace218d218e4e1f4e98c455" exitCode=0 Jan 25 00:25:14 crc kubenswrapper[4947]: I0125 00:25:14.755738 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e","Type":"ContainerDied","Data":"2d797b0b6199276b6cd59577ed9b25e9afa5b49c9ace218d218e4e1f4e98c455"} Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.186739 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.254532 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-pull\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.254617 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-run\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.254664 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc8cj\" (UniqueName: \"kubernetes.io/projected/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-kube-api-access-jc8cj\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.254709 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-blob-cache\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.256055 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.254802 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildworkdir\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258089 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-system-configs\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258195 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-proxy-ca-bundles\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258231 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildcachedir\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258268 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-push\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258335 4947 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258381 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-ca-bundles\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258437 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-node-pullsecrets\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258558 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-root\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258645 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258653 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.259172 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.259231 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.259686 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.259716 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.259730 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.259743 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.259754 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.259767 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.262590 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod 
"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.262615 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.268767 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-kube-api-access-jc8cj" (OuterVolumeSpecName: "kube-api-access-jc8cj") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "kube-api-access-jc8cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.297330 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.361173 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.361205 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc8cj\" (UniqueName: \"kubernetes.io/projected/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-kube-api-access-jc8cj\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.361215 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.361227 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.432735 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.463396 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.780645 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e","Type":"ContainerDied","Data":"57b1451538b238472d5680ea17ac3804ec620b15075d2890a204aa83384bdd41"} Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.780728 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57b1451538b238472d5680ea17ac3804ec620b15075d2890a204aa83384bdd41" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.780867 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:25:18 crc kubenswrapper[4947]: I0125 00:25:18.687773 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:25:18 crc kubenswrapper[4947]: I0125 00:25:18.704043 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.930719 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 25 00:25:20 crc kubenswrapper[4947]: E0125 00:25:20.931059 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerName="manage-dockerfile" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.931079 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerName="manage-dockerfile" Jan 25 00:25:20 crc kubenswrapper[4947]: E0125 00:25:20.931099 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerName="docker-build" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.931111 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerName="docker-build" Jan 25 00:25:20 crc kubenswrapper[4947]: E0125 00:25:20.931160 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerName="git-clone" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.931173 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerName="git-clone" Jan 25 00:25:20 crc kubenswrapper[4947]: E0125 00:25:20.931195 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerName="registry-server" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.931208 4947 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerName="registry-server" Jan 25 00:25:20 crc kubenswrapper[4947]: E0125 00:25:20.931227 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerName="extract-utilities" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.931239 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerName="extract-utilities" Jan 25 00:25:20 crc kubenswrapper[4947]: E0125 00:25:20.931256 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerName="extract-content" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.931268 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerName="extract-content" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.931442 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerName="registry-server" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.931458 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerName="docker-build" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.932472 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.935774 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.936098 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.937391 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.937395 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.953608 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.048995 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049103 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049200 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049259 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049347 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049411 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049464 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-push\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049530 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049693 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049822 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd9th\" (UniqueName: \"kubernetes.io/projected/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-kube-api-access-vd9th\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049897 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049977 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151275 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151361 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151481 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151529 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151565 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151609 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151686 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151745 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151770 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 
00:25:21.151761 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151818 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-push\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151895 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151949 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.152004 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd9th\" (UniqueName: \"kubernetes.io/projected/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-kube-api-access-vd9th\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.152448 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.152599 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.152687 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.152758 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.153159 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.156149 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j" 
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.156265 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.156589 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.163142 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.163450 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.163647 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.171618 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.171909 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-push\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.181055 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd9th\" (UniqueName: \"kubernetes.io/projected/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-kube-api-access-vd9th\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.248605 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: W0125 00:25:21.528946 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e46fc8f_1d0e_4901_baf3_35b9c3d210e0.slice/crio-c60a3d6c1b97fcd7d4338672133bef50222a63be5f9dcf1ac6181b7bd30b02ab WatchSource:0}: Error finding container c60a3d6c1b97fcd7d4338672133bef50222a63be5f9dcf1ac6181b7bd30b02ab: Status 404 returned error can't find the container with id c60a3d6c1b97fcd7d4338672133bef50222a63be5f9dcf1ac6181b7bd30b02ab Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.531392 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.825100 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" 
event={"ID":"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0","Type":"ContainerStarted","Data":"c60a3d6c1b97fcd7d4338672133bef50222a63be5f9dcf1ac6181b7bd30b02ab"} Jan 25 00:25:22 crc kubenswrapper[4947]: I0125 00:25:22.836935 4947 generic.go:334] "Generic (PLEG): container finished" podID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" containerID="8859bd2c23cc381a47cfe2e04937484855d70dd800833fd36744559428ce5555" exitCode=0 Jan 25 00:25:22 crc kubenswrapper[4947]: I0125 00:25:22.837000 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0","Type":"ContainerDied","Data":"8859bd2c23cc381a47cfe2e04937484855d70dd800833fd36744559428ce5555"} Jan 25 00:25:23 crc kubenswrapper[4947]: I0125 00:25:23.847773 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0","Type":"ContainerStarted","Data":"028bccab312ad8a58a8f5190d26bb368c53b9a76f922c33081e772b4e9d903bf"} Jan 25 00:25:23 crc kubenswrapper[4947]: I0125 00:25:23.887789 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=3.887751905 podStartE2EDuration="3.887751905s" podCreationTimestamp="2026-01-25 00:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:25:23.882250403 +0000 UTC m=+963.115240883" watchObservedRunningTime="2026-01-25 00:25:23.887751905 +0000 UTC m=+963.120742385" Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.475089 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.476430 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" 
podUID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" containerName="docker-build" containerID="cri-o://028bccab312ad8a58a8f5190d26bb368c53b9a76f922c33081e772b4e9d903bf" gracePeriod=30 Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.904753 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_4e46fc8f-1d0e-4901-baf3-35b9c3d210e0/docker-build/0.log" Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.905843 4947 generic.go:334] "Generic (PLEG): container finished" podID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" containerID="028bccab312ad8a58a8f5190d26bb368c53b9a76f922c33081e772b4e9d903bf" exitCode=1 Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.905894 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0","Type":"ContainerDied","Data":"028bccab312ad8a58a8f5190d26bb368c53b9a76f922c33081e772b4e9d903bf"} Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.905958 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0","Type":"ContainerDied","Data":"c60a3d6c1b97fcd7d4338672133bef50222a63be5f9dcf1ac6181b7bd30b02ab"} Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.905980 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c60a3d6c1b97fcd7d4338672133bef50222a63be5f9dcf1ac6181b7bd30b02ab" Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.926180 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_4e46fc8f-1d0e-4901-baf3-35b9c3d210e0/docker-build/0.log" Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.927348 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113326 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-push\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113443 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-ca-bundles\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113515 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-node-pullsecrets\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113554 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildcachedir\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113593 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-system-configs\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113628 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-pull\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113665 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113659 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113693 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-proxy-ca-bundles\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113832 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-run\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113880 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd9th\" (UniqueName: \"kubernetes.io/projected/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-kube-api-access-vd9th\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113933 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-root\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.114035 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-blob-cache\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.114081 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildworkdir\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.114839 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.115213 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.115272 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.115453 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.116011 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.116109 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.116387 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.121258 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.122937 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.124028 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-kube-api-access-vd9th" (OuterVolumeSpecName: "kube-api-access-vd9th") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "kube-api-access-vd9th". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.216853 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.216913 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.216937 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.216957 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.216977 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.216996 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.217015 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.217033 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd9th\" (UniqueName: \"kubernetes.io/projected/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-kube-api-access-vd9th\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.312853 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.318319 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.573349 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.622789 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.914536 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.966364 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.983654 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.102043 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" path="/var/lib/kubelet/pods/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0/volumes" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.121386 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 25 00:25:33 crc kubenswrapper[4947]: E0125 00:25:33.121674 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" containerName="manage-dockerfile" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.121696 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" containerName="manage-dockerfile" Jan 25 00:25:33 crc kubenswrapper[4947]: E0125 00:25:33.121712 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" containerName="docker-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.121721 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" containerName="docker-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.121911 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" containerName="docker-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.123072 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.125988 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.126308 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.126525 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-sys-config" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.127241 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.161381 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.232055 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.232196 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.232230 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.232274 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-push\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.232296 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkr5w\" (UniqueName: \"kubernetes.io/projected/24dd009b-58df-485b-b901-4a51266605a5-kube-api-access-dkr5w\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.232829 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.232941 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" 
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.233055 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.233173 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.233293 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.233376 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.233451 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.335742 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.335809 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.335848 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.335887 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-push\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.335915 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dkr5w\" (UniqueName: \"kubernetes.io/projected/24dd009b-58df-485b-b901-4a51266605a5-kube-api-access-dkr5w\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.335938 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.335972 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.335998 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.336027 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: 
I0125 00:25:33.336071 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.336099 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.336151 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.336378 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.336478 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" 
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.336624 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.337351 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.337433 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.337497 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.337348 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " 
pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.337379 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.338546 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.343166 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-push\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.343174 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.362303 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkr5w\" (UniqueName: \"kubernetes.io/projected/24dd009b-58df-485b-b901-4a51266605a5-kube-api-access-dkr5w\") pod \"smart-gateway-operator-2-build\" (UID: 
\"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.441388 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.671602 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.923382 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"24dd009b-58df-485b-b901-4a51266605a5","Type":"ContainerStarted","Data":"316740fcb3496165d3a51b4cf681d66241a263c85170f99c944e7e013a8dc739"} Jan 25 00:25:34 crc kubenswrapper[4947]: I0125 00:25:34.934582 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"24dd009b-58df-485b-b901-4a51266605a5","Type":"ContainerStarted","Data":"de1a6e7022681b695c7adc8df24c0c416eef349bd035dbd66a5282f51988919d"} Jan 25 00:25:35 crc kubenswrapper[4947]: E0125 00:25:35.103258 4947 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.163:55100->38.102.83.163:33315: write tcp 38.102.83.163:55100->38.102.83.163:33315: write: connection reset by peer Jan 25 00:25:35 crc kubenswrapper[4947]: I0125 00:25:35.944372 4947 generic.go:334] "Generic (PLEG): container finished" podID="24dd009b-58df-485b-b901-4a51266605a5" containerID="de1a6e7022681b695c7adc8df24c0c416eef349bd035dbd66a5282f51988919d" exitCode=0 Jan 25 00:25:35 crc kubenswrapper[4947]: I0125 00:25:35.944422 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"24dd009b-58df-485b-b901-4a51266605a5","Type":"ContainerDied","Data":"de1a6e7022681b695c7adc8df24c0c416eef349bd035dbd66a5282f51988919d"} Jan 25 00:25:36 crc 
kubenswrapper[4947]: I0125 00:25:36.955173 4947 generic.go:334] "Generic (PLEG): container finished" podID="24dd009b-58df-485b-b901-4a51266605a5" containerID="654cbcf4fecd28f2a127e8c0c0ea41930b9994d3b82a1dd577d5782a16c50141" exitCode=0 Jan 25 00:25:36 crc kubenswrapper[4947]: I0125 00:25:36.955252 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"24dd009b-58df-485b-b901-4a51266605a5","Type":"ContainerDied","Data":"654cbcf4fecd28f2a127e8c0c0ea41930b9994d3b82a1dd577d5782a16c50141"} Jan 25 00:25:37 crc kubenswrapper[4947]: I0125 00:25:37.007706 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_24dd009b-58df-485b-b901-4a51266605a5/manage-dockerfile/0.log" Jan 25 00:25:37 crc kubenswrapper[4947]: I0125 00:25:37.973593 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"24dd009b-58df-485b-b901-4a51266605a5","Type":"ContainerStarted","Data":"353fad3d8231255be347fabf3161b03de5be600c5db257bc7c559537402b9c78"} Jan 25 00:25:38 crc kubenswrapper[4947]: I0125 00:25:38.037446 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=5.037426023 podStartE2EDuration="5.037426023s" podCreationTimestamp="2026-01-25 00:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:25:38.029316449 +0000 UTC m=+977.262306889" watchObservedRunningTime="2026-01-25 00:25:38.037426023 +0000 UTC m=+977.270416473" Jan 25 00:27:00 crc kubenswrapper[4947]: I0125 00:27:00.563625 4947 generic.go:334] "Generic (PLEG): container finished" podID="24dd009b-58df-485b-b901-4a51266605a5" containerID="353fad3d8231255be347fabf3161b03de5be600c5db257bc7c559537402b9c78" exitCode=0 Jan 25 00:27:00 crc kubenswrapper[4947]: 
I0125 00:27:00.563722 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"24dd009b-58df-485b-b901-4a51266605a5","Type":"ContainerDied","Data":"353fad3d8231255be347fabf3161b03de5be600c5db257bc7c559537402b9c78"} Jan 25 00:27:01 crc kubenswrapper[4947]: I0125 00:27:01.881947 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041533 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-ca-bundles\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041625 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-proxy-ca-bundles\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041660 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-build-blob-cache\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041700 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-root\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041747 
4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-push\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041813 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkr5w\" (UniqueName: \"kubernetes.io/projected/24dd009b-58df-485b-b901-4a51266605a5-kube-api-access-dkr5w\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041843 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-system-configs\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041930 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-run\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041980 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-buildcachedir\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.042017 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-buildworkdir\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.042068 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-pull\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.042117 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-node-pullsecrets\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.042632 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.042682 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.043035 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.044011 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.044730 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.045331 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.048779 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.049406 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.050341 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24dd009b-58df-485b-b901-4a51266605a5-kube-api-access-dkr5w" (OuterVolumeSpecName: "kube-api-access-dkr5w") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "kube-api-access-dkr5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.050713 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144622 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144671 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144684 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144695 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144708 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144720 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkr5w\" (UniqueName: \"kubernetes.io/projected/24dd009b-58df-485b-b901-4a51266605a5-kube-api-access-dkr5w\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144731 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-system-configs\") on 
node \"crc\" DevicePath \"\"" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144742 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144753 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144767 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.234909 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.246747 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.587467 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"24dd009b-58df-485b-b901-4a51266605a5","Type":"ContainerDied","Data":"316740fcb3496165d3a51b4cf681d66241a263c85170f99c944e7e013a8dc739"} Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.587543 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="316740fcb3496165d3a51b4cf681d66241a263c85170f99c944e7e013a8dc739" Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.587621 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 25 00:27:03 crc kubenswrapper[4947]: I0125 00:27:03.976626 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:27:04 crc kubenswrapper[4947]: I0125 00:27:04.075722 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.802006 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 25 00:27:06 crc kubenswrapper[4947]: E0125 00:27:06.802586 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24dd009b-58df-485b-b901-4a51266605a5" containerName="docker-build" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.802598 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="24dd009b-58df-485b-b901-4a51266605a5" containerName="docker-build" Jan 25 00:27:06 crc kubenswrapper[4947]: E0125 00:27:06.802606 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24dd009b-58df-485b-b901-4a51266605a5" containerName="manage-dockerfile" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.802612 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="24dd009b-58df-485b-b901-4a51266605a5" containerName="manage-dockerfile" Jan 25 00:27:06 crc kubenswrapper[4947]: E0125 00:27:06.802635 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24dd009b-58df-485b-b901-4a51266605a5" containerName="git-clone" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.802642 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="24dd009b-58df-485b-b901-4a51266605a5" containerName="git-clone" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.802740 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="24dd009b-58df-485b-b901-4a51266605a5" containerName="docker-build" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.803340 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.805021 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.805149 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.805084 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.812531 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.813654 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915229 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbnkq\" (UniqueName: \"kubernetes.io/projected/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-kube-api-access-sbnkq\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915297 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildworkdir\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915327 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-root\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915349 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-run\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915427 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildcachedir\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915474 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915510 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-pull\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915535 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915570 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915621 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915650 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-push\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915679 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-system-configs\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.018856 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildworkdir\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.018942 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbnkq\" (UniqueName: \"kubernetes.io/projected/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-kube-api-access-sbnkq\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019005 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-root\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019050 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-run\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019176 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildcachedir\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019224 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-ca-bundles\") pod 
\"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019302 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-pull\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019375 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019442 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019520 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019581 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-push\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " 
pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019656 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-system-configs\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.020572 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.020789 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.020876 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-root\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.020989 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildworkdir\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.021045 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-system-configs\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.021656 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildcachedir\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.032061 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.034475 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.240404 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-run\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.241326 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: 
\"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-push\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.241419 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-pull\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.246508 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbnkq\" (UniqueName: \"kubernetes.io/projected/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-kube-api-access-sbnkq\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.418907 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.687024 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 25 00:27:08 crc kubenswrapper[4947]: I0125 00:27:08.654461 4947 generic.go:334] "Generic (PLEG): container finished" podID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" containerID="4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0" exitCode=0 Jan 25 00:27:08 crc kubenswrapper[4947]: I0125 00:27:08.654544 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"3b32a95f-4c19-445e-b87b-cbe8bc5f201a","Type":"ContainerDied","Data":"4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0"} Jan 25 00:27:08 crc kubenswrapper[4947]: I0125 00:27:08.654915 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"3b32a95f-4c19-445e-b87b-cbe8bc5f201a","Type":"ContainerStarted","Data":"1a28b2860ddc19f895a721896df139ceca9edbb20b78b9c62eb78485a1f6d3b6"} Jan 25 00:27:09 crc kubenswrapper[4947]: I0125 00:27:09.673441 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"3b32a95f-4c19-445e-b87b-cbe8bc5f201a","Type":"ContainerStarted","Data":"269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8"} Jan 25 00:27:09 crc kubenswrapper[4947]: I0125 00:27:09.701969 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=3.701948 podStartE2EDuration="3.701948s" podCreationTimestamp="2026-01-25 00:27:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:27:09.696570791 +0000 UTC m=+1068.929561241" watchObservedRunningTime="2026-01-25 00:27:09.701948 +0000 UTC m=+1068.934938460" Jan 25 00:27:17 crc 
kubenswrapper[4947]: I0125 00:27:17.073657 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:27:17 crc kubenswrapper[4947]: I0125 00:27:17.074766 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:27:17 crc kubenswrapper[4947]: I0125 00:27:17.567703 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 25 00:27:17 crc kubenswrapper[4947]: I0125 00:27:17.568040 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" containerName="docker-build" containerID="cri-o://269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8" gracePeriod=30 Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.577470 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_3b32a95f-4c19-445e-b87b-cbe8bc5f201a/docker-build/0.log" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.578846 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708266 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-ca-bundles\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708385 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbnkq\" (UniqueName: \"kubernetes.io/projected/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-kube-api-access-sbnkq\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708485 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-root\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708574 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-node-pullsecrets\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708616 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-pull\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708662 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-system-configs\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708681 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708710 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-proxy-ca-bundles\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708831 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-blob-cache\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708885 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildworkdir\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708932 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: 
\"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-push\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708998 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-run\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.709040 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildcachedir\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.709649 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.709643 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.709694 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). 
InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.709821 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.710599 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.710867 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.711352 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.716236 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.725506 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-kube-api-access-sbnkq" (OuterVolumeSpecName: "kube-api-access-sbnkq") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "kube-api-access-sbnkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.725523 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.740543 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_3b32a95f-4c19-445e-b87b-cbe8bc5f201a/docker-build/0.log" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.741273 4947 generic.go:334] "Generic (PLEG): container finished" podID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" containerID="269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8" exitCode=1 Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.741419 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"3b32a95f-4c19-445e-b87b-cbe8bc5f201a","Type":"ContainerDied","Data":"269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8"} Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.741592 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"3b32a95f-4c19-445e-b87b-cbe8bc5f201a","Type":"ContainerDied","Data":"1a28b2860ddc19f895a721896df139ceca9edbb20b78b9c62eb78485a1f6d3b6"} Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.741716 4947 scope.go:117] "RemoveContainer" containerID="269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.741980 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811308 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811650 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811661 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811669 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbnkq\" (UniqueName: \"kubernetes.io/projected/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-kube-api-access-sbnkq\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811680 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811689 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811699 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-proxy-ca-bundles\") on node 
\"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811709 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811721 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.817720 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.841731 4947 scope.go:117] "RemoveContainer" containerID="4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.854084 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.878801 4947 scope.go:117] "RemoveContainer" containerID="269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8" Jan 25 00:27:18 crc kubenswrapper[4947]: E0125 00:27:18.879275 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8\": container with ID starting with 269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8 not found: ID does not exist" containerID="269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.879313 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8"} err="failed to get container status \"269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8\": rpc error: code = NotFound desc = could not find container \"269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8\": container with ID starting with 269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8 not found: ID does not exist" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.879339 4947 scope.go:117] "RemoveContainer" containerID="4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0" Jan 25 00:27:18 crc kubenswrapper[4947]: E0125 00:27:18.879791 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0\": container with ID starting with 4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0 not found: ID does not exist" containerID="4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.879822 
4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0"} err="failed to get container status \"4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0\": rpc error: code = NotFound desc = could not find container \"4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0\": container with ID starting with 4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0 not found: ID does not exist" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.913688 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.913753 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.077084 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.083968 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.100067 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" path="/var/lib/kubelet/pods/3b32a95f-4c19-445e-b87b-cbe8bc5f201a/volumes" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.181245 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 25 00:27:19 crc kubenswrapper[4947]: E0125 00:27:19.181586 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" 
containerName="manage-dockerfile" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.181611 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" containerName="manage-dockerfile" Jan 25 00:27:19 crc kubenswrapper[4947]: E0125 00:27:19.181636 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" containerName="docker-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.181646 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" containerName="docker-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.181785 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" containerName="docker-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.182880 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.185369 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.185765 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.186084 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.186335 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.213475 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320410 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-push\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320468 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-buildworkdir\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320501 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320526 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-root\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320552 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-pull\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320572 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-run\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320603 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320619 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z46q6\" (UniqueName: \"kubernetes.io/projected/77af6791-c204-477c-b362-ce322fd18448-kube-api-access-z46q6\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320649 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-buildcachedir\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320675 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-system-configs\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320698 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320716 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426105 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-buildworkdir\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426204 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426276 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-root\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426334 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-pull\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426370 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-run\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426444 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426503 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z46q6\" (UniqueName: \"kubernetes.io/projected/77af6791-c204-477c-b362-ce322fd18448-kube-api-access-z46q6\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426577 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-buildcachedir\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426650 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-system-configs\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426684 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426745 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426853 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-push\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426912 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-buildworkdir\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.427090 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-node-pullsecrets\") pod \"sg-core-2-build\" 
(UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.427168 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-buildcachedir\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.427299 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-run\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.427328 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-root\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.427379 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-system-configs\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.427575 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 
00:27:19.428090 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.431418 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.438641 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-pull\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.438732 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-push\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.455738 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z46q6\" (UniqueName: \"kubernetes.io/projected/77af6791-c204-477c-b362-ce322fd18448-kube-api-access-z46q6\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.500068 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.937821 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 25 00:27:19 crc kubenswrapper[4947]: W0125 00:27:19.941327 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77af6791_c204_477c_b362_ce322fd18448.slice/crio-0759e43021908768d14063019ea0f5cf22f849b7ea0a3f13a5fc0eeba338e9be WatchSource:0}: Error finding container 0759e43021908768d14063019ea0f5cf22f849b7ea0a3f13a5fc0eeba338e9be: Status 404 returned error can't find the container with id 0759e43021908768d14063019ea0f5cf22f849b7ea0a3f13a5fc0eeba338e9be Jan 25 00:27:20 crc kubenswrapper[4947]: I0125 00:27:20.765833 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77af6791-c204-477c-b362-ce322fd18448","Type":"ContainerStarted","Data":"53cf344b52518198c3aa0c5dfedb17a9bae990fd26020258415ae4dd5eadf484"} Jan 25 00:27:20 crc kubenswrapper[4947]: I0125 00:27:20.766375 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77af6791-c204-477c-b362-ce322fd18448","Type":"ContainerStarted","Data":"0759e43021908768d14063019ea0f5cf22f849b7ea0a3f13a5fc0eeba338e9be"} Jan 25 00:27:21 crc kubenswrapper[4947]: I0125 00:27:21.773433 4947 generic.go:334] "Generic (PLEG): container finished" podID="77af6791-c204-477c-b362-ce322fd18448" containerID="53cf344b52518198c3aa0c5dfedb17a9bae990fd26020258415ae4dd5eadf484" exitCode=0 Jan 25 00:27:21 crc kubenswrapper[4947]: I0125 00:27:21.773545 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77af6791-c204-477c-b362-ce322fd18448","Type":"ContainerDied","Data":"53cf344b52518198c3aa0c5dfedb17a9bae990fd26020258415ae4dd5eadf484"} Jan 25 00:27:22 crc kubenswrapper[4947]: I0125 
00:27:22.783296 4947 generic.go:334] "Generic (PLEG): container finished" podID="77af6791-c204-477c-b362-ce322fd18448" containerID="ad26ed26fc6645ed7ca740ab7344045c7f2bab95a3573f26741ffbd26f39128b" exitCode=0 Jan 25 00:27:22 crc kubenswrapper[4947]: I0125 00:27:22.783362 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77af6791-c204-477c-b362-ce322fd18448","Type":"ContainerDied","Data":"ad26ed26fc6645ed7ca740ab7344045c7f2bab95a3573f26741ffbd26f39128b"} Jan 25 00:27:22 crc kubenswrapper[4947]: I0125 00:27:22.834993 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_77af6791-c204-477c-b362-ce322fd18448/manage-dockerfile/0.log" Jan 25 00:27:23 crc kubenswrapper[4947]: I0125 00:27:23.797491 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77af6791-c204-477c-b362-ce322fd18448","Type":"ContainerStarted","Data":"926b6653f0edfbf0c39f622f9fe1adc864ba4c8a2544121528ab9b4b3a607201"} Jan 25 00:27:23 crc kubenswrapper[4947]: I0125 00:27:23.824284 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=4.824258437 podStartE2EDuration="4.824258437s" podCreationTimestamp="2026-01-25 00:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:27:23.821540392 +0000 UTC m=+1083.054530842" watchObservedRunningTime="2026-01-25 00:27:23.824258437 +0000 UTC m=+1083.057248897" Jan 25 00:27:47 crc kubenswrapper[4947]: I0125 00:27:47.072375 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:27:47 crc 
kubenswrapper[4947]: I0125 00:27:47.072917 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:28:17 crc kubenswrapper[4947]: I0125 00:28:17.072281 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:28:17 crc kubenswrapper[4947]: I0125 00:28:17.072808 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:28:17 crc kubenswrapper[4947]: I0125 00:28:17.072872 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:28:17 crc kubenswrapper[4947]: I0125 00:28:17.073516 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16e9b3158d1aebf618ce87b5aa5158940b299b1ba44d29893c13b023abe239b1"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 00:28:17 crc kubenswrapper[4947]: I0125 00:28:17.073581 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" 
podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" containerID="cri-o://16e9b3158d1aebf618ce87b5aa5158940b299b1ba44d29893c13b023abe239b1" gracePeriod=600 Jan 25 00:28:25 crc kubenswrapper[4947]: I0125 00:28:25.428455 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-mdgrh_5f67ec28-baae-409e-a42d-03a486e7a26b/machine-config-daemon/5.log" Jan 25 00:28:25 crc kubenswrapper[4947]: I0125 00:28:25.433100 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="16e9b3158d1aebf618ce87b5aa5158940b299b1ba44d29893c13b023abe239b1" exitCode=-1 Jan 25 00:28:25 crc kubenswrapper[4947]: I0125 00:28:25.433169 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"16e9b3158d1aebf618ce87b5aa5158940b299b1ba44d29893c13b023abe239b1"} Jan 25 00:28:25 crc kubenswrapper[4947]: I0125 00:28:25.433205 4947 scope.go:117] "RemoveContainer" containerID="3d54be0f7cb7c92bd5d7a293f8c1b17de01e73a4d2768de922b6d4f49ead1a39" Jan 25 00:28:26 crc kubenswrapper[4947]: I0125 00:28:26.442036 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"e5fe371d16c66e0e03d5287fecc388949acddf08b1c19945fd233890f245ebb7"} Jan 25 00:29:28 crc kubenswrapper[4947]: I0125 00:29:28.502075 4947 scope.go:117] "RemoveContainer" containerID="29e25cd653e5646a037201713219653ddf735f00af3b32cacab158f9791bc90a" Jan 25 00:29:28 crc kubenswrapper[4947]: I0125 00:29:28.551725 4947 scope.go:117] "RemoveContainer" containerID="42d0401113a9cc459f1654c1224909ba635689103f9bf3566db1b01252300549" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.141207 4947 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px"] Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.142394 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.145351 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.151008 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.161815 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px"] Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.294834 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee769c6a-9981-4818-8ae5-842f6937caec-config-volume\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.294892 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svvn4\" (UniqueName: \"kubernetes.io/projected/ee769c6a-9981-4818-8ae5-842f6937caec-kube-api-access-svvn4\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.294930 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/ee769c6a-9981-4818-8ae5-842f6937caec-secret-volume\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.395967 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee769c6a-9981-4818-8ae5-842f6937caec-config-volume\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.396022 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svvn4\" (UniqueName: \"kubernetes.io/projected/ee769c6a-9981-4818-8ae5-842f6937caec-kube-api-access-svvn4\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.396056 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee769c6a-9981-4818-8ae5-842f6937caec-secret-volume\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.396750 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee769c6a-9981-4818-8ae5-842f6937caec-config-volume\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 
00:30:00.402381 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee769c6a-9981-4818-8ae5-842f6937caec-secret-volume\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.427895 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svvn4\" (UniqueName: \"kubernetes.io/projected/ee769c6a-9981-4818-8ae5-842f6937caec-kube-api-access-svvn4\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.460956 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.929470 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px"] Jan 25 00:30:01 crc kubenswrapper[4947]: I0125 00:30:01.417321 4947 generic.go:334] "Generic (PLEG): container finished" podID="ee769c6a-9981-4818-8ae5-842f6937caec" containerID="93fcc613c9ba4599c5f6015e7f766475a34b2493629af138060b766f5b39aba0" exitCode=0 Jan 25 00:30:01 crc kubenswrapper[4947]: I0125 00:30:01.417432 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" event={"ID":"ee769c6a-9981-4818-8ae5-842f6937caec","Type":"ContainerDied","Data":"93fcc613c9ba4599c5f6015e7f766475a34b2493629af138060b766f5b39aba0"} Jan 25 00:30:01 crc kubenswrapper[4947]: I0125 00:30:01.417641 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" 
event={"ID":"ee769c6a-9981-4818-8ae5-842f6937caec","Type":"ContainerStarted","Data":"f851f5b340aac89ef44e8051e40a845b0c9d4c36d60825cd7f7839fec0cbd98f"} Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.671365 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.830010 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee769c6a-9981-4818-8ae5-842f6937caec-secret-volume\") pod \"ee769c6a-9981-4818-8ae5-842f6937caec\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.830090 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svvn4\" (UniqueName: \"kubernetes.io/projected/ee769c6a-9981-4818-8ae5-842f6937caec-kube-api-access-svvn4\") pod \"ee769c6a-9981-4818-8ae5-842f6937caec\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.830170 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee769c6a-9981-4818-8ae5-842f6937caec-config-volume\") pod \"ee769c6a-9981-4818-8ae5-842f6937caec\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.831074 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee769c6a-9981-4818-8ae5-842f6937caec-config-volume" (OuterVolumeSpecName: "config-volume") pod "ee769c6a-9981-4818-8ae5-842f6937caec" (UID: "ee769c6a-9981-4818-8ae5-842f6937caec"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.835576 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee769c6a-9981-4818-8ae5-842f6937caec-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ee769c6a-9981-4818-8ae5-842f6937caec" (UID: "ee769c6a-9981-4818-8ae5-842f6937caec"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.835612 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee769c6a-9981-4818-8ae5-842f6937caec-kube-api-access-svvn4" (OuterVolumeSpecName: "kube-api-access-svvn4") pod "ee769c6a-9981-4818-8ae5-842f6937caec" (UID: "ee769c6a-9981-4818-8ae5-842f6937caec"). InnerVolumeSpecName "kube-api-access-svvn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.932054 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svvn4\" (UniqueName: \"kubernetes.io/projected/ee769c6a-9981-4818-8ae5-842f6937caec-kube-api-access-svvn4\") on node \"crc\" DevicePath \"\"" Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.932181 4947 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee769c6a-9981-4818-8ae5-842f6937caec-config-volume\") on node \"crc\" DevicePath \"\"" Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.932203 4947 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee769c6a-9981-4818-8ae5-842f6937caec-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 25 00:30:03 crc kubenswrapper[4947]: I0125 00:30:03.430913 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" 
event={"ID":"ee769c6a-9981-4818-8ae5-842f6937caec","Type":"ContainerDied","Data":"f851f5b340aac89ef44e8051e40a845b0c9d4c36d60825cd7f7839fec0cbd98f"} Jan 25 00:30:03 crc kubenswrapper[4947]: I0125 00:30:03.430962 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f851f5b340aac89ef44e8051e40a845b0c9d4c36d60825cd7f7839fec0cbd98f" Jan 25 00:30:03 crc kubenswrapper[4947]: I0125 00:30:03.431009 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:47 crc kubenswrapper[4947]: I0125 00:30:47.073338 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:30:47 crc kubenswrapper[4947]: I0125 00:30:47.074263 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:31:16 crc kubenswrapper[4947]: I0125 00:31:16.967892 4947 generic.go:334] "Generic (PLEG): container finished" podID="77af6791-c204-477c-b362-ce322fd18448" containerID="926b6653f0edfbf0c39f622f9fe1adc864ba4c8a2544121528ab9b4b3a607201" exitCode=0 Jan 25 00:31:16 crc kubenswrapper[4947]: I0125 00:31:16.968013 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77af6791-c204-477c-b362-ce322fd18448","Type":"ContainerDied","Data":"926b6653f0edfbf0c39f622f9fe1adc864ba4c8a2544121528ab9b4b3a607201"} Jan 25 00:31:17 crc kubenswrapper[4947]: I0125 00:31:17.073389 4947 patch_prober.go:28] interesting 
pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:31:17 crc kubenswrapper[4947]: I0125 00:31:17.074265 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.273865 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.314646 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-root\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.314721 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-proxy-ca-bundles\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.314746 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-pull\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 
00:31:18.314779 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-build-blob-cache\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.314829 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-ca-bundles\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.314851 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z46q6\" (UniqueName: \"kubernetes.io/projected/77af6791-c204-477c-b362-ce322fd18448-kube-api-access-z46q6\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.314915 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-buildcachedir\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.314949 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-push\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.315001 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-system-configs\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.315030 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-run\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.315061 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-buildworkdir\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.315089 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-node-pullsecrets\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.315398 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.315539 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.316756 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.316872 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.319780 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.327771 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.331488 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77af6791-c204-477c-b362-ce322fd18448-kube-api-access-z46q6" (OuterVolumeSpecName: "kube-api-access-z46q6") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "kube-api-access-z46q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.332542 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.333182 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.339819 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417100 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417210 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417235 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417252 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417269 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z46q6\" (UniqueName: \"kubernetes.io/projected/77af6791-c204-477c-b362-ce322fd18448-kube-api-access-z46q6\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417281 4947 reconciler_common.go:293] "Volume 
detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417293 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417305 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417317 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417329 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.694007 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.721783 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.990082 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77af6791-c204-477c-b362-ce322fd18448","Type":"ContainerDied","Data":"0759e43021908768d14063019ea0f5cf22f849b7ea0a3f13a5fc0eeba338e9be"} Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.990182 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0759e43021908768d14063019ea0f5cf22f849b7ea0a3f13a5fc0eeba338e9be" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.990316 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 25 00:31:21 crc kubenswrapper[4947]: I0125 00:31:21.146420 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:31:21 crc kubenswrapper[4947]: I0125 00:31:21.150930 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.088045 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 25 00:31:23 crc kubenswrapper[4947]: E0125 00:31:23.088324 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77af6791-c204-477c-b362-ce322fd18448" containerName="docker-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.088339 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="77af6791-c204-477c-b362-ce322fd18448" containerName="docker-build" Jan 25 00:31:23 crc kubenswrapper[4947]: E0125 00:31:23.088350 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee769c6a-9981-4818-8ae5-842f6937caec" containerName="collect-profiles" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.088358 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee769c6a-9981-4818-8ae5-842f6937caec" containerName="collect-profiles" Jan 25 00:31:23 crc kubenswrapper[4947]: E0125 00:31:23.088370 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77af6791-c204-477c-b362-ce322fd18448" containerName="git-clone" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.088378 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="77af6791-c204-477c-b362-ce322fd18448" containerName="git-clone" Jan 25 00:31:23 crc kubenswrapper[4947]: E0125 00:31:23.088392 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77af6791-c204-477c-b362-ce322fd18448" containerName="manage-dockerfile" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.088400 4947 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="77af6791-c204-477c-b362-ce322fd18448" containerName="manage-dockerfile" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.088546 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee769c6a-9981-4818-8ae5-842f6937caec" containerName="collect-profiles" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.088559 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="77af6791-c204-477c-b362-ce322fd18448" containerName="docker-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.089324 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.102765 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.102937 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.103071 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.103120 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.113919 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179166 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179224 
4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179266 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179293 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179413 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179548 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-pull\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc 
kubenswrapper[4947]: I0125 00:31:23.179626 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179682 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179727 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl4xl\" (UniqueName: \"kubernetes.io/projected/b66303d6-9f4a-401f-8dc6-5855a51b28cb-kube-api-access-sl4xl\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179790 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-push\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179947 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 
00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.180000 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.280591 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-pull\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.280635 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.280661 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.280681 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl4xl\" (UniqueName: \"kubernetes.io/projected/b66303d6-9f4a-401f-8dc6-5855a51b28cb-kube-api-access-sl4xl\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 
00:31:23.280697 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-push\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281055 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281090 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281450 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281179 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281391 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281210 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281645 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281659 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281469 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281722 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281744 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281777 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281795 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281841 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.282246 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-proxy-ca-bundles\") pod 
\"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.282258 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.289656 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-push\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.290051 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-pull\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.305379 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl4xl\" (UniqueName: \"kubernetes.io/projected/b66303d6-9f4a-401f-8dc6-5855a51b28cb-kube-api-access-sl4xl\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.421030 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.649211 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 25 00:31:24 crc kubenswrapper[4947]: I0125 00:31:24.021709 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b66303d6-9f4a-401f-8dc6-5855a51b28cb","Type":"ContainerStarted","Data":"3b9968f7292e05181c1395dde57498d955080e1dcb49d9484e57992f06fed9b1"} Jan 25 00:31:24 crc kubenswrapper[4947]: I0125 00:31:24.021763 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b66303d6-9f4a-401f-8dc6-5855a51b28cb","Type":"ContainerStarted","Data":"a586952b97453573c2d5734b96500c0e485fae2fd231d75cb06ecf54a70cb8b6"} Jan 25 00:31:25 crc kubenswrapper[4947]: I0125 00:31:25.033567 4947 generic.go:334] "Generic (PLEG): container finished" podID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" containerID="3b9968f7292e05181c1395dde57498d955080e1dcb49d9484e57992f06fed9b1" exitCode=0 Jan 25 00:31:25 crc kubenswrapper[4947]: I0125 00:31:25.033633 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b66303d6-9f4a-401f-8dc6-5855a51b28cb","Type":"ContainerDied","Data":"3b9968f7292e05181c1395dde57498d955080e1dcb49d9484e57992f06fed9b1"} Jan 25 00:31:26 crc kubenswrapper[4947]: I0125 00:31:26.042770 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b66303d6-9f4a-401f-8dc6-5855a51b28cb","Type":"ContainerStarted","Data":"8bd3c19ebf87dfa9492f0b6c526fd93072c03b5bf0b404a53c130aa373ce7a49"} Jan 25 00:31:26 crc kubenswrapper[4947]: I0125 00:31:26.071764 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.07174281 podStartE2EDuration="3.07174281s" 
podCreationTimestamp="2026-01-25 00:31:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:31:26.071660378 +0000 UTC m=+1325.304650838" watchObservedRunningTime="2026-01-25 00:31:26.07174281 +0000 UTC m=+1325.304733260" Jan 25 00:31:28 crc kubenswrapper[4947]: I0125 00:31:28.644612 4947 scope.go:117] "RemoveContainer" containerID="8859bd2c23cc381a47cfe2e04937484855d70dd800833fd36744559428ce5555" Jan 25 00:31:28 crc kubenswrapper[4947]: I0125 00:31:28.683870 4947 scope.go:117] "RemoveContainer" containerID="028bccab312ad8a58a8f5190d26bb368c53b9a76f922c33081e772b4e9d903bf" Jan 25 00:31:33 crc kubenswrapper[4947]: I0125 00:31:33.810246 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 25 00:31:33 crc kubenswrapper[4947]: I0125 00:31:33.812863 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" containerName="docker-build" containerID="cri-o://8bd3c19ebf87dfa9492f0b6c526fd93072c03b5bf0b404a53c130aa373ce7a49" gracePeriod=30 Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.092605 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_b66303d6-9f4a-401f-8dc6-5855a51b28cb/docker-build/0.log" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.093330 4947 generic.go:334] "Generic (PLEG): container finished" podID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" containerID="8bd3c19ebf87dfa9492f0b6c526fd93072c03b5bf0b404a53c130aa373ce7a49" exitCode=1 Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.093428 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b66303d6-9f4a-401f-8dc6-5855a51b28cb","Type":"ContainerDied","Data":"8bd3c19ebf87dfa9492f0b6c526fd93072c03b5bf0b404a53c130aa373ce7a49"} Jan 25 00:31:34 
crc kubenswrapper[4947]: I0125 00:31:34.093480 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b66303d6-9f4a-401f-8dc6-5855a51b28cb","Type":"ContainerDied","Data":"a586952b97453573c2d5734b96500c0e485fae2fd231d75cb06ecf54a70cb8b6"} Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.093500 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a586952b97453573c2d5734b96500c0e485fae2fd231d75cb06ecf54a70cb8b6" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.137397 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_b66303d6-9f4a-401f-8dc6-5855a51b28cb/docker-build/0.log" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.138219 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255118 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl4xl\" (UniqueName: \"kubernetes.io/projected/b66303d6-9f4a-401f-8dc6-5855a51b28cb-kube-api-access-sl4xl\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255246 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-run\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255286 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-proxy-ca-bundles\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: 
\"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255341 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildworkdir\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255373 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-push\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255410 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-node-pullsecrets\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255431 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-system-configs\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255492 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildcachedir\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255522 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-ca-bundles\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255556 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-pull\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255582 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-blob-cache\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255630 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-root\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.256163 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.256238 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.256430 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.256671 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.256905 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.256947 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.257065 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.263556 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b66303d6-9f4a-401f-8dc6-5855a51b28cb-kube-api-access-sl4xl" (OuterVolumeSpecName: "kube-api-access-sl4xl") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "kube-api-access-sl4xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.266554 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.267180 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.340952 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356782 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356816 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356826 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356834 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356843 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356852 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356860 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl4xl\" (UniqueName: \"kubernetes.io/projected/b66303d6-9f4a-401f-8dc6-5855a51b28cb-kube-api-access-sl4xl\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356868 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356876 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356887 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356894 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: 
\"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.728122 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.762853 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.097908 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.142198 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.147163 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.472755 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 25 00:31:35 crc kubenswrapper[4947]: E0125 00:31:35.473336 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" containerName="docker-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.473439 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" containerName="docker-build" Jan 25 00:31:35 crc kubenswrapper[4947]: E0125 00:31:35.473528 
4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" containerName="manage-dockerfile" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.473595 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" containerName="manage-dockerfile" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.473777 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" containerName="docker-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.474870 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.476348 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.476504 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.477049 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.485290 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.509298 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571102 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc 
kubenswrapper[4947]: I0125 00:31:35.571415 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-push\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571528 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571673 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571793 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-pull\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571837 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5rgf\" (UniqueName: \"kubernetes.io/projected/5e48b97d-b6da-46e1-805a-1573652be38c-kube-api-access-r5rgf\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " 
pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571887 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571916 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571945 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571968 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.572024 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: 
\"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.572048 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.672472 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.672789 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-push\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.672910 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673005 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" 
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.672959 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673018 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673267 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-pull\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673380 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5rgf\" (UniqueName: \"kubernetes.io/projected/5e48b97d-b6da-46e1-805a-1573652be38c-kube-api-access-r5rgf\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673476 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673571 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673663 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673743 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673874 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673992 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.674087 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.674199 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.674368 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.674169 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.674716 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.674765 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-run\") pod 
\"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.675907 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.678631 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-push\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.679534 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-pull\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.692601 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5rgf\" (UniqueName: \"kubernetes.io/projected/5e48b97d-b6da-46e1-805a-1573652be38c-kube-api-access-r5rgf\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.790177 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:36 crc kubenswrapper[4947]: I0125 00:31:36.310648 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 25 00:31:37 crc kubenswrapper[4947]: I0125 00:31:37.097776 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" path="/var/lib/kubelet/pods/b66303d6-9f4a-401f-8dc6-5855a51b28cb/volumes" Jan 25 00:31:37 crc kubenswrapper[4947]: I0125 00:31:37.109815 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5e48b97d-b6da-46e1-805a-1573652be38c","Type":"ContainerStarted","Data":"a942688021affba03664d49fe678826c656d3e63d2d4c5a24dd21f78de3cfbb0"} Jan 25 00:31:37 crc kubenswrapper[4947]: I0125 00:31:37.109865 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5e48b97d-b6da-46e1-805a-1573652be38c","Type":"ContainerStarted","Data":"41f32d7801129b67e6b416864000e59d2e61a76199bac60f5ea8a250a0e45fd8"} Jan 25 00:31:38 crc kubenswrapper[4947]: I0125 00:31:38.119766 4947 generic.go:334] "Generic (PLEG): container finished" podID="5e48b97d-b6da-46e1-805a-1573652be38c" containerID="a942688021affba03664d49fe678826c656d3e63d2d4c5a24dd21f78de3cfbb0" exitCode=0 Jan 25 00:31:38 crc kubenswrapper[4947]: I0125 00:31:38.119815 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5e48b97d-b6da-46e1-805a-1573652be38c","Type":"ContainerDied","Data":"a942688021affba03664d49fe678826c656d3e63d2d4c5a24dd21f78de3cfbb0"} Jan 25 00:31:39 crc kubenswrapper[4947]: I0125 00:31:39.127397 4947 generic.go:334] "Generic (PLEG): container finished" podID="5e48b97d-b6da-46e1-805a-1573652be38c" containerID="e2b932a776e7707de5ef6e958eef570d3418be9ae6cecee7ae5b145fffaa715d" exitCode=0 Jan 25 00:31:39 crc kubenswrapper[4947]: I0125 00:31:39.127472 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5e48b97d-b6da-46e1-805a-1573652be38c","Type":"ContainerDied","Data":"e2b932a776e7707de5ef6e958eef570d3418be9ae6cecee7ae5b145fffaa715d"} Jan 25 00:31:39 crc kubenswrapper[4947]: I0125 00:31:39.170182 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_5e48b97d-b6da-46e1-805a-1573652be38c/manage-dockerfile/0.log" Jan 25 00:31:40 crc kubenswrapper[4947]: I0125 00:31:40.147344 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5e48b97d-b6da-46e1-805a-1573652be38c","Type":"ContainerStarted","Data":"31eed73ad1635b22a1b5134bc06b4c2144f4ba65ecd33dca5ba7c98cd031bcc9"} Jan 25 00:31:40 crc kubenswrapper[4947]: I0125 00:31:40.361517 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.361487725 podStartE2EDuration="5.361487725s" podCreationTimestamp="2026-01-25 00:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:31:40.35307907 +0000 UTC m=+1339.586069550" watchObservedRunningTime="2026-01-25 00:31:40.361487725 +0000 UTC m=+1339.594478165" Jan 25 00:31:47 crc kubenswrapper[4947]: I0125 00:31:47.072239 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:31:47 crc kubenswrapper[4947]: I0125 00:31:47.072818 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:31:47 crc kubenswrapper[4947]: I0125 00:31:47.072866 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:31:47 crc kubenswrapper[4947]: I0125 00:31:47.073495 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5fe371d16c66e0e03d5287fecc388949acddf08b1c19945fd233890f245ebb7"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 00:31:47 crc kubenswrapper[4947]: I0125 00:31:47.073559 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" containerID="cri-o://e5fe371d16c66e0e03d5287fecc388949acddf08b1c19945fd233890f245ebb7" gracePeriod=600 Jan 25 00:31:48 crc kubenswrapper[4947]: I0125 00:31:48.196693 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="e5fe371d16c66e0e03d5287fecc388949acddf08b1c19945fd233890f245ebb7" exitCode=0 Jan 25 00:31:48 crc kubenswrapper[4947]: I0125 00:31:48.196734 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"e5fe371d16c66e0e03d5287fecc388949acddf08b1c19945fd233890f245ebb7"} Jan 25 00:31:48 crc kubenswrapper[4947]: I0125 00:31:48.197018 4947 scope.go:117] "RemoveContainer" containerID="16e9b3158d1aebf618ce87b5aa5158940b299b1ba44d29893c13b023abe239b1" Jan 25 00:31:49 crc kubenswrapper[4947]: I0125 00:31:49.204675 4947 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f"} Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.454008 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-khvm7"] Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.456258 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.483087 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-khvm7"] Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.617703 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-utilities\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.617882 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lph5f\" (UniqueName: \"kubernetes.io/projected/af43bb2d-e763-416d-803d-16fec7332771-kube-api-access-lph5f\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.617965 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-catalog-content\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " 
pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.719289 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-utilities\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.719376 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lph5f\" (UniqueName: \"kubernetes.io/projected/af43bb2d-e763-416d-803d-16fec7332771-kube-api-access-lph5f\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.719415 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-catalog-content\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.720015 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-catalog-content\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.720205 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-utilities\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " 
pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.744071 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lph5f\" (UniqueName: \"kubernetes.io/projected/af43bb2d-e763-416d-803d-16fec7332771-kube-api-access-lph5f\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.772585 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:30 crc kubenswrapper[4947]: I0125 00:32:30.227889 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-khvm7"] Jan 25 00:32:30 crc kubenswrapper[4947]: I0125 00:32:30.535749 4947 generic.go:334] "Generic (PLEG): container finished" podID="af43bb2d-e763-416d-803d-16fec7332771" containerID="3b5d100dc6d315ff476d1d5918266454374cbb046c8dd09419aacaed19996070" exitCode=0 Jan 25 00:32:30 crc kubenswrapper[4947]: I0125 00:32:30.535813 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khvm7" event={"ID":"af43bb2d-e763-416d-803d-16fec7332771","Type":"ContainerDied","Data":"3b5d100dc6d315ff476d1d5918266454374cbb046c8dd09419aacaed19996070"} Jan 25 00:32:30 crc kubenswrapper[4947]: I0125 00:32:30.536148 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khvm7" event={"ID":"af43bb2d-e763-416d-803d-16fec7332771","Type":"ContainerStarted","Data":"d56d5895bb34a9485e4d216be271276937d7d7a301e7d79984299b26a86ea042"} Jan 25 00:32:30 crc kubenswrapper[4947]: I0125 00:32:30.539632 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.545705 4947 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-khvm7" event={"ID":"af43bb2d-e763-416d-803d-16fec7332771","Type":"ContainerStarted","Data":"eed5d033314da1bcbc2e52cf9e1e2bd3420778ab20933b2ef804e302ab801e05"} Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.644750 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b77dd"] Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.645837 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.664237 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b77dd"] Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.747144 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-utilities\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.747282 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4klrq\" (UniqueName: \"kubernetes.io/projected/70732187-2d74-4e3a-9e95-79c235f70b3b-kube-api-access-4klrq\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.747341 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-catalog-content\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:31 crc 
kubenswrapper[4947]: I0125 00:32:31.848737 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-catalog-content\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.848887 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-utilities\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.848975 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4klrq\" (UniqueName: \"kubernetes.io/projected/70732187-2d74-4e3a-9e95-79c235f70b3b-kube-api-access-4klrq\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.849423 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-catalog-content\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.849596 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-utilities\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.869675 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4klrq\" (UniqueName: \"kubernetes.io/projected/70732187-2d74-4e3a-9e95-79c235f70b3b-kube-api-access-4klrq\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.962209 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:32 crc kubenswrapper[4947]: I0125 00:32:32.174999 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b77dd"] Jan 25 00:32:32 crc kubenswrapper[4947]: W0125 00:32:32.186635 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70732187_2d74_4e3a_9e95_79c235f70b3b.slice/crio-a9c1e2ac91c88bb7647a2359a8af63858c17c41cf83a204cddc41485659caf41 WatchSource:0}: Error finding container a9c1e2ac91c88bb7647a2359a8af63858c17c41cf83a204cddc41485659caf41: Status 404 returned error can't find the container with id a9c1e2ac91c88bb7647a2359a8af63858c17c41cf83a204cddc41485659caf41 Jan 25 00:32:32 crc kubenswrapper[4947]: I0125 00:32:32.551868 4947 generic.go:334] "Generic (PLEG): container finished" podID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerID="c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6" exitCode=0 Jan 25 00:32:32 crc kubenswrapper[4947]: I0125 00:32:32.551941 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b77dd" event={"ID":"70732187-2d74-4e3a-9e95-79c235f70b3b","Type":"ContainerDied","Data":"c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6"} Jan 25 00:32:32 crc kubenswrapper[4947]: I0125 00:32:32.551976 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b77dd" 
event={"ID":"70732187-2d74-4e3a-9e95-79c235f70b3b","Type":"ContainerStarted","Data":"a9c1e2ac91c88bb7647a2359a8af63858c17c41cf83a204cddc41485659caf41"} Jan 25 00:32:32 crc kubenswrapper[4947]: I0125 00:32:32.554002 4947 generic.go:334] "Generic (PLEG): container finished" podID="af43bb2d-e763-416d-803d-16fec7332771" containerID="eed5d033314da1bcbc2e52cf9e1e2bd3420778ab20933b2ef804e302ab801e05" exitCode=0 Jan 25 00:32:32 crc kubenswrapper[4947]: I0125 00:32:32.554092 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khvm7" event={"ID":"af43bb2d-e763-416d-803d-16fec7332771","Type":"ContainerDied","Data":"eed5d033314da1bcbc2e52cf9e1e2bd3420778ab20933b2ef804e302ab801e05"} Jan 25 00:32:33 crc kubenswrapper[4947]: I0125 00:32:33.562112 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b77dd" event={"ID":"70732187-2d74-4e3a-9e95-79c235f70b3b","Type":"ContainerStarted","Data":"c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de"} Jan 25 00:32:33 crc kubenswrapper[4947]: I0125 00:32:33.564824 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khvm7" event={"ID":"af43bb2d-e763-416d-803d-16fec7332771","Type":"ContainerStarted","Data":"a4bcd8438bd8d926fe64439908fcec662adf3b56502ee9d48c75bf06bbf3a249"} Jan 25 00:32:33 crc kubenswrapper[4947]: I0125 00:32:33.592850 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-khvm7" podStartSLOduration=2.0813164730000002 podStartE2EDuration="4.592831204s" podCreationTimestamp="2026-01-25 00:32:29 +0000 UTC" firstStartedPulling="2026-01-25 00:32:30.539283109 +0000 UTC m=+1389.772273549" lastFinishedPulling="2026-01-25 00:32:33.05079785 +0000 UTC m=+1392.283788280" observedRunningTime="2026-01-25 00:32:33.592078537 +0000 UTC m=+1392.825068977" watchObservedRunningTime="2026-01-25 00:32:33.592831204 +0000 UTC 
m=+1392.825821644" Jan 25 00:32:34 crc kubenswrapper[4947]: I0125 00:32:34.579040 4947 generic.go:334] "Generic (PLEG): container finished" podID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerID="c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de" exitCode=0 Jan 25 00:32:34 crc kubenswrapper[4947]: I0125 00:32:34.579250 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b77dd" event={"ID":"70732187-2d74-4e3a-9e95-79c235f70b3b","Type":"ContainerDied","Data":"c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de"} Jan 25 00:32:35 crc kubenswrapper[4947]: I0125 00:32:35.588452 4947 generic.go:334] "Generic (PLEG): container finished" podID="5e48b97d-b6da-46e1-805a-1573652be38c" containerID="31eed73ad1635b22a1b5134bc06b4c2144f4ba65ecd33dca5ba7c98cd031bcc9" exitCode=0 Jan 25 00:32:35 crc kubenswrapper[4947]: I0125 00:32:35.588606 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5e48b97d-b6da-46e1-805a-1573652be38c","Type":"ContainerDied","Data":"31eed73ad1635b22a1b5134bc06b4c2144f4ba65ecd33dca5ba7c98cd031bcc9"} Jan 25 00:32:35 crc kubenswrapper[4947]: I0125 00:32:35.592712 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b77dd" event={"ID":"70732187-2d74-4e3a-9e95-79c235f70b3b","Type":"ContainerStarted","Data":"6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c"} Jan 25 00:32:35 crc kubenswrapper[4947]: I0125 00:32:35.634609 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b77dd" podStartSLOduration=2.06427527 podStartE2EDuration="4.634594221s" podCreationTimestamp="2026-01-25 00:32:31 +0000 UTC" firstStartedPulling="2026-01-25 00:32:32.553459993 +0000 UTC m=+1391.786450423" lastFinishedPulling="2026-01-25 00:32:35.123778934 +0000 UTC m=+1394.356769374" observedRunningTime="2026-01-25 00:32:35.631791884 
+0000 UTC m=+1394.864782314" watchObservedRunningTime="2026-01-25 00:32:35.634594221 +0000 UTC m=+1394.867584661" Jan 25 00:32:36 crc kubenswrapper[4947]: I0125 00:32:36.903226 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020482 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-root\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020545 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-build-blob-cache\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020577 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-buildcachedir\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020621 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-proxy-ca-bundles\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020655 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-system-configs\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020681 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-ca-bundles\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020713 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-pull\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020761 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5rgf\" (UniqueName: \"kubernetes.io/projected/5e48b97d-b6da-46e1-805a-1573652be38c-kube-api-access-r5rgf\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020785 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-buildworkdir\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020824 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-push\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: 
\"5e48b97d-b6da-46e1-805a-1573652be38c\") " Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020845 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-node-pullsecrets\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020869 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-run\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.021239 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.022002 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.022004 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.022598 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.022599 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.023152 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.024930 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.029053 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.029494 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.030006 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e48b97d-b6da-46e1-805a-1573652be38c-kube-api-access-r5rgf" (OuterVolumeSpecName: "kube-api-access-r5rgf") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "kube-api-access-r5rgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.122922 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.122975 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.122990 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.123003 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.123014 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.123027 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5rgf\" (UniqueName: \"kubernetes.io/projected/5e48b97d-b6da-46e1-805a-1573652be38c-kube-api-access-r5rgf\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.123042 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-buildworkdir\") on node \"crc\" DevicePath \"\"" 
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.123053 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.123064 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.123075 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.140803 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.224691 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.608236 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5e48b97d-b6da-46e1-805a-1573652be38c","Type":"ContainerDied","Data":"41f32d7801129b67e6b416864000e59d2e61a76199bac60f5ea8a250a0e45fd8"} Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.608276 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41f32d7801129b67e6b416864000e59d2e61a76199bac60f5ea8a250a0e45fd8" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.608335 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.809852 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.834769 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:39 crc kubenswrapper[4947]: I0125 00:32:39.773225 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:39 crc kubenswrapper[4947]: I0125 00:32:39.773629 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:39 crc kubenswrapper[4947]: I0125 00:32:39.817931 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:40 crc kubenswrapper[4947]: I0125 00:32:40.683802 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:40 crc kubenswrapper[4947]: I0125 00:32:40.839056 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-khvm7"] Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.044367 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 25 00:32:41 crc kubenswrapper[4947]: E0125 00:32:41.045060 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e48b97d-b6da-46e1-805a-1573652be38c" containerName="manage-dockerfile" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.045081 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e48b97d-b6da-46e1-805a-1573652be38c" containerName="manage-dockerfile" Jan 25 00:32:41 crc kubenswrapper[4947]: E0125 00:32:41.045120 4947 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5e48b97d-b6da-46e1-805a-1573652be38c" containerName="git-clone" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.045164 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e48b97d-b6da-46e1-805a-1573652be38c" containerName="git-clone" Jan 25 00:32:41 crc kubenswrapper[4947]: E0125 00:32:41.045176 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e48b97d-b6da-46e1-805a-1573652be38c" containerName="docker-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.045184 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e48b97d-b6da-46e1-805a-1573652be38c" containerName="docker-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.045376 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e48b97d-b6da-46e1-805a-1573652be38c" containerName="docker-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.046170 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.047988 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.048160 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.048653 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.051615 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.065844 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 25 00:32:41 crc 
kubenswrapper[4947]: I0125 00:32:41.179647 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.179725 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.179746 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97rtz\" (UniqueName: \"kubernetes.io/projected/45b0bbe8-2222-47ff-b23d-f2e571562df0-kube-api-access-97rtz\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.179780 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.179798 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.179888 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.179919 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.179947 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.179981 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: 
I0125 00:32:41.180001 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.180028 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.180052 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281441 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281508 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" 
(UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281539 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97rtz\" (UniqueName: \"kubernetes.io/projected/45b0bbe8-2222-47ff-b23d-f2e571562df0-kube-api-access-97rtz\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281589 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281611 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281661 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281694 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281726 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281745 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281767 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281797 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281827 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.282418 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.282720 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.282748 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.282843 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.283033 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.283179 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.283259 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.283405 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.283455 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 
00:32:41.287259 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.288156 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.305982 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97rtz\" (UniqueName: \"kubernetes.io/projected/45b0bbe8-2222-47ff-b23d-f2e571562df0-kube-api-access-97rtz\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.376412 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.904327 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.964507 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.964675 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:42 crc kubenswrapper[4947]: I0125 00:32:42.025546 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:42 crc kubenswrapper[4947]: I0125 00:32:42.649500 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"45b0bbe8-2222-47ff-b23d-f2e571562df0","Type":"ContainerStarted","Data":"26747227c318ee56f01386f2aef39290d6a1c144b6fa6c040fda3c0ee559d715"} Jan 25 00:32:42 crc kubenswrapper[4947]: I0125 00:32:42.649927 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-khvm7" podUID="af43bb2d-e763-416d-803d-16fec7332771" containerName="registry-server" containerID="cri-o://a4bcd8438bd8d926fe64439908fcec662adf3b56502ee9d48c75bf06bbf3a249" gracePeriod=2 Jan 25 00:32:42 crc kubenswrapper[4947]: I0125 00:32:42.709576 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:43 crc kubenswrapper[4947]: I0125 00:32:43.244682 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b77dd"] Jan 25 00:32:43 crc kubenswrapper[4947]: I0125 00:32:43.658867 4947 generic.go:334] "Generic (PLEG): container finished" 
podID="af43bb2d-e763-416d-803d-16fec7332771" containerID="a4bcd8438bd8d926fe64439908fcec662adf3b56502ee9d48c75bf06bbf3a249" exitCode=0 Jan 25 00:32:43 crc kubenswrapper[4947]: I0125 00:32:43.658952 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khvm7" event={"ID":"af43bb2d-e763-416d-803d-16fec7332771","Type":"ContainerDied","Data":"a4bcd8438bd8d926fe64439908fcec662adf3b56502ee9d48c75bf06bbf3a249"} Jan 25 00:32:43 crc kubenswrapper[4947]: I0125 00:32:43.660513 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"45b0bbe8-2222-47ff-b23d-f2e571562df0","Type":"ContainerStarted","Data":"d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5"} Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.246940 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.441983 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lph5f\" (UniqueName: \"kubernetes.io/projected/af43bb2d-e763-416d-803d-16fec7332771-kube-api-access-lph5f\") pod \"af43bb2d-e763-416d-803d-16fec7332771\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.442064 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-catalog-content\") pod \"af43bb2d-e763-416d-803d-16fec7332771\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.442092 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-utilities\") pod \"af43bb2d-e763-416d-803d-16fec7332771\" 
(UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.443354 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-utilities" (OuterVolumeSpecName: "utilities") pod "af43bb2d-e763-416d-803d-16fec7332771" (UID: "af43bb2d-e763-416d-803d-16fec7332771"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.447751 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af43bb2d-e763-416d-803d-16fec7332771-kube-api-access-lph5f" (OuterVolumeSpecName: "kube-api-access-lph5f") pod "af43bb2d-e763-416d-803d-16fec7332771" (UID: "af43bb2d-e763-416d-803d-16fec7332771"). InnerVolumeSpecName "kube-api-access-lph5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.523415 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af43bb2d-e763-416d-803d-16fec7332771" (UID: "af43bb2d-e763-416d-803d-16fec7332771"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.544163 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lph5f\" (UniqueName: \"kubernetes.io/projected/af43bb2d-e763-416d-803d-16fec7332771-kube-api-access-lph5f\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.544237 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.544294 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.671480 4947 generic.go:334] "Generic (PLEG): container finished" podID="45b0bbe8-2222-47ff-b23d-f2e571562df0" containerID="d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5" exitCode=0 Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.671541 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"45b0bbe8-2222-47ff-b23d-f2e571562df0","Type":"ContainerDied","Data":"d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5"} Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.675405 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b77dd" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerName="registry-server" containerID="cri-o://6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c" gracePeriod=2 Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.675569 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.676770 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khvm7" event={"ID":"af43bb2d-e763-416d-803d-16fec7332771","Type":"ContainerDied","Data":"d56d5895bb34a9485e4d216be271276937d7d7a301e7d79984299b26a86ea042"} Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.676813 4947 scope.go:117] "RemoveContainer" containerID="a4bcd8438bd8d926fe64439908fcec662adf3b56502ee9d48c75bf06bbf3a249" Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.713112 4947 scope.go:117] "RemoveContainer" containerID="eed5d033314da1bcbc2e52cf9e1e2bd3420778ab20933b2ef804e302ab801e05" Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.764261 4947 scope.go:117] "RemoveContainer" containerID="3b5d100dc6d315ff476d1d5918266454374cbb046c8dd09419aacaed19996070" Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.766671 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-khvm7"] Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.772509 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-khvm7"] Jan 25 00:32:45 crc kubenswrapper[4947]: I0125 00:32:45.103805 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af43bb2d-e763-416d-803d-16fec7332771" path="/var/lib/kubelet/pods/af43bb2d-e763-416d-803d-16fec7332771/volumes" Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.254432 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.369930 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-utilities\") pod \"70732187-2d74-4e3a-9e95-79c235f70b3b\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.370047 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-catalog-content\") pod \"70732187-2d74-4e3a-9e95-79c235f70b3b\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.370223 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4klrq\" (UniqueName: \"kubernetes.io/projected/70732187-2d74-4e3a-9e95-79c235f70b3b-kube-api-access-4klrq\") pod \"70732187-2d74-4e3a-9e95-79c235f70b3b\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.371302 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-utilities" (OuterVolumeSpecName: "utilities") pod "70732187-2d74-4e3a-9e95-79c235f70b3b" (UID: "70732187-2d74-4e3a-9e95-79c235f70b3b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.377359 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70732187-2d74-4e3a-9e95-79c235f70b3b-kube-api-access-4klrq" (OuterVolumeSpecName: "kube-api-access-4klrq") pod "70732187-2d74-4e3a-9e95-79c235f70b3b" (UID: "70732187-2d74-4e3a-9e95-79c235f70b3b"). InnerVolumeSpecName "kube-api-access-4klrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.472052 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4klrq\" (UniqueName: \"kubernetes.io/projected/70732187-2d74-4e3a-9e95-79c235f70b3b-kube-api-access-4klrq\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.472090 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.694843 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"45b0bbe8-2222-47ff-b23d-f2e571562df0","Type":"ContainerStarted","Data":"10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1"} Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.698663 4947 generic.go:334] "Generic (PLEG): container finished" podID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerID="6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c" exitCode=0 Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.698738 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b77dd" event={"ID":"70732187-2d74-4e3a-9e95-79c235f70b3b","Type":"ContainerDied","Data":"6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c"} Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.698781 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b77dd" event={"ID":"70732187-2d74-4e3a-9e95-79c235f70b3b","Type":"ContainerDied","Data":"a9c1e2ac91c88bb7647a2359a8af63858c17c41cf83a204cddc41485659caf41"} Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.698813 4947 scope.go:117] "RemoveContainer" containerID="6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c" Jan 25 
00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.698999 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.185515 4947 scope.go:117] "RemoveContainer" containerID="c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de" Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.225577 4947 scope.go:117] "RemoveContainer" containerID="c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6" Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.266809 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=6.266776044 podStartE2EDuration="6.266776044s" podCreationTimestamp="2026-01-25 00:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:32:47.262114833 +0000 UTC m=+1406.495105313" watchObservedRunningTime="2026-01-25 00:32:47.266776044 +0000 UTC m=+1406.499766484" Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.269710 4947 scope.go:117] "RemoveContainer" containerID="6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c" Jan 25 00:32:47 crc kubenswrapper[4947]: E0125 00:32:47.270311 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c\": container with ID starting with 6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c not found: ID does not exist" containerID="6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c" Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.270358 4947 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c"} err="failed to get container status \"6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c\": rpc error: code = NotFound desc = could not find container \"6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c\": container with ID starting with 6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c not found: ID does not exist" Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.270390 4947 scope.go:117] "RemoveContainer" containerID="c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de" Jan 25 00:32:47 crc kubenswrapper[4947]: E0125 00:32:47.270913 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de\": container with ID starting with c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de not found: ID does not exist" containerID="c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de" Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.270942 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de"} err="failed to get container status \"c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de\": rpc error: code = NotFound desc = could not find container \"c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de\": container with ID starting with c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de not found: ID does not exist" Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.270956 4947 scope.go:117] "RemoveContainer" containerID="c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6" Jan 25 00:32:47 crc kubenswrapper[4947]: E0125 00:32:47.271199 4947 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6\": container with ID starting with c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6 not found: ID does not exist" containerID="c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6" Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.271220 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6"} err="failed to get container status \"c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6\": rpc error: code = NotFound desc = could not find container \"c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6\": container with ID starting with c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6 not found: ID does not exist" Jan 25 00:32:48 crc kubenswrapper[4947]: I0125 00:32:48.604284 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70732187-2d74-4e3a-9e95-79c235f70b3b" (UID: "70732187-2d74-4e3a-9e95-79c235f70b3b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:32:48 crc kubenswrapper[4947]: I0125 00:32:48.606879 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:48 crc kubenswrapper[4947]: I0125 00:32:48.838762 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b77dd"] Jan 25 00:32:48 crc kubenswrapper[4947]: I0125 00:32:48.846519 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b77dd"] Jan 25 00:32:49 crc kubenswrapper[4947]: I0125 00:32:49.105963 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" path="/var/lib/kubelet/pods/70732187-2d74-4e3a-9e95-79c235f70b3b/volumes" Jan 25 00:32:51 crc kubenswrapper[4947]: I0125 00:32:51.847404 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 25 00:32:51 crc kubenswrapper[4947]: I0125 00:32:51.848541 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="45b0bbe8-2222-47ff-b23d-f2e571562df0" containerName="docker-build" containerID="cri-o://10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1" gracePeriod=30 Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.301835 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_45b0bbe8-2222-47ff-b23d-f2e571562df0/docker-build/0.log" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.302832 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483150 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildcachedir\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483275 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-system-configs\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483307 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483357 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-node-pullsecrets\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483395 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildworkdir\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483421 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-proxy-ca-bundles\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483447 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-push\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483501 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-ca-bundles\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483561 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-pull\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483555 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483588 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-blob-cache\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483636 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97rtz\" (UniqueName: \"kubernetes.io/projected/45b0bbe8-2222-47ff-b23d-f2e571562df0-kube-api-access-97rtz\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483664 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-run\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483699 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-root\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.484114 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.484162 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.484593 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.484960 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.485264 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). 
InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.485290 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.486238 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.486344 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.495305 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.495398 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b0bbe8-2222-47ff-b23d-f2e571562df0-kube-api-access-97rtz" (OuterVolumeSpecName: "kube-api-access-97rtz") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "kube-api-access-97rtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.499004 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.535011 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585518 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97rtz\" (UniqueName: \"kubernetes.io/projected/45b0bbe8-2222-47ff-b23d-f2e571562df0-kube-api-access-97rtz\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585564 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585578 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585591 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585603 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585617 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585630 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-push\") on node 
\"crc\" DevicePath \"\"" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585643 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585655 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585666 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.768277 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_45b0bbe8-2222-47ff-b23d-f2e571562df0/docker-build/0.log" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.769306 4947 generic.go:334] "Generic (PLEG): container finished" podID="45b0bbe8-2222-47ff-b23d-f2e571562df0" containerID="10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1" exitCode=1 Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.769371 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"45b0bbe8-2222-47ff-b23d-f2e571562df0","Type":"ContainerDied","Data":"10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1"} Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.769422 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"45b0bbe8-2222-47ff-b23d-f2e571562df0","Type":"ContainerDied","Data":"26747227c318ee56f01386f2aef39290d6a1c144b6fa6c040fda3c0ee559d715"} 
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.769446 4947 scope.go:117] "RemoveContainer" containerID="10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.769459 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.828754 4947 scope.go:117] "RemoveContainer" containerID="d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.837298 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.850861 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.865851 4947 scope.go:117] "RemoveContainer" containerID="10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1" Jan 25 00:32:52 crc kubenswrapper[4947]: E0125 00:32:52.866457 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1\": container with ID starting with 10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1 not found: ID does not exist" containerID="10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.866498 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1"} err="failed to get container status \"10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1\": rpc error: code = NotFound desc = could not find container 
\"10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1\": container with ID starting with 10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1 not found: ID does not exist" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.866525 4947 scope.go:117] "RemoveContainer" containerID="d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5" Jan 25 00:32:52 crc kubenswrapper[4947]: E0125 00:32:52.867191 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5\": container with ID starting with d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5 not found: ID does not exist" containerID="d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5" Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.867270 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5"} err="failed to get container status \"d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5\": rpc error: code = NotFound desc = could not find container \"d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5\": container with ID starting with d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5 not found: ID does not exist" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.106852 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45b0bbe8-2222-47ff-b23d-f2e571562df0" path="/var/lib/kubelet/pods/45b0bbe8-2222-47ff-b23d-f2e571562df0/volumes" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495107 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 25 00:32:53 crc kubenswrapper[4947]: E0125 00:32:53.495423 4947 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="45b0bbe8-2222-47ff-b23d-f2e571562df0" containerName="manage-dockerfile" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495440 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b0bbe8-2222-47ff-b23d-f2e571562df0" containerName="manage-dockerfile" Jan 25 00:32:53 crc kubenswrapper[4947]: E0125 00:32:53.495455 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerName="extract-utilities" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495463 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerName="extract-utilities" Jan 25 00:32:53 crc kubenswrapper[4947]: E0125 00:32:53.495473 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af43bb2d-e763-416d-803d-16fec7332771" containerName="extract-content" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495481 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="af43bb2d-e763-416d-803d-16fec7332771" containerName="extract-content" Jan 25 00:32:53 crc kubenswrapper[4947]: E0125 00:32:53.495492 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af43bb2d-e763-416d-803d-16fec7332771" containerName="registry-server" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495500 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="af43bb2d-e763-416d-803d-16fec7332771" containerName="registry-server" Jan 25 00:32:53 crc kubenswrapper[4947]: E0125 00:32:53.495507 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b0bbe8-2222-47ff-b23d-f2e571562df0" containerName="docker-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495515 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b0bbe8-2222-47ff-b23d-f2e571562df0" containerName="docker-build" Jan 25 00:32:53 crc kubenswrapper[4947]: E0125 00:32:53.495527 4947 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="af43bb2d-e763-416d-803d-16fec7332771" containerName="extract-utilities" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495535 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="af43bb2d-e763-416d-803d-16fec7332771" containerName="extract-utilities" Jan 25 00:32:53 crc kubenswrapper[4947]: E0125 00:32:53.495550 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerName="registry-server" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495557 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerName="registry-server" Jan 25 00:32:53 crc kubenswrapper[4947]: E0125 00:32:53.495569 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerName="extract-content" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495576 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerName="extract-content" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495701 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="af43bb2d-e763-416d-803d-16fec7332771" containerName="registry-server" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495715 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="45b0bbe8-2222-47ff-b23d-f2e571562df0" containerName="docker-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495734 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerName="registry-server" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.496794 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.504442 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.504820 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.504996 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.505064 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.512496 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599315 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599425 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599470 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599523 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599552 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvdcj\" (UniqueName: \"kubernetes.io/projected/1401febc-016f-410e-83fb-dd8e2eaead5e-kube-api-access-jvdcj\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599664 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599735 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599771 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599813 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599862 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599883 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599939 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.701713 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.701831 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.701871 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.701894 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.701918 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.702489 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.702545 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703029 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703108 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703195 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703220 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703253 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvdcj\" (UniqueName: \"kubernetes.io/projected/1401febc-016f-410e-83fb-dd8e2eaead5e-kube-api-access-jvdcj\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703301 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703337 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 
00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703352 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703377 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703749 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703931 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.704207 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.704164 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.704380 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.708459 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.709235 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.723098 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvdcj\" (UniqueName: \"kubernetes.io/projected/1401febc-016f-410e-83fb-dd8e2eaead5e-kube-api-access-jvdcj\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.816585 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:54 crc kubenswrapper[4947]: I0125 00:32:54.331389 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 25 00:32:54 crc kubenswrapper[4947]: I0125 00:32:54.791279 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"1401febc-016f-410e-83fb-dd8e2eaead5e","Type":"ContainerStarted","Data":"12a4a876b1c059d28d4620add3f0e9fdabe33bc5bab3e19249ea10cf4e371664"} Jan 25 00:32:54 crc kubenswrapper[4947]: I0125 00:32:54.791747 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"1401febc-016f-410e-83fb-dd8e2eaead5e","Type":"ContainerStarted","Data":"6d75a8acc54f852c0df652a21678fabe1ea78a90eda755e5dd6934eaed7a164f"} Jan 25 00:32:55 crc kubenswrapper[4947]: I0125 00:32:55.800562 4947 generic.go:334] "Generic (PLEG): container finished" podID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerID="12a4a876b1c059d28d4620add3f0e9fdabe33bc5bab3e19249ea10cf4e371664" exitCode=0 Jan 25 00:32:55 crc kubenswrapper[4947]: I0125 00:32:55.800622 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"1401febc-016f-410e-83fb-dd8e2eaead5e","Type":"ContainerDied","Data":"12a4a876b1c059d28d4620add3f0e9fdabe33bc5bab3e19249ea10cf4e371664"} Jan 25 00:32:56 crc kubenswrapper[4947]: I0125 00:32:56.812438 4947 generic.go:334] "Generic (PLEG): container finished" podID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerID="1fa4cda2ff1aafe3e40d8f14aab93a93a41bce1589d482eaee445fe7dda68118" exitCode=0 
Jan 25 00:32:56 crc kubenswrapper[4947]: I0125 00:32:56.812541 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"1401febc-016f-410e-83fb-dd8e2eaead5e","Type":"ContainerDied","Data":"1fa4cda2ff1aafe3e40d8f14aab93a93a41bce1589d482eaee445fe7dda68118"} Jan 25 00:32:56 crc kubenswrapper[4947]: I0125 00:32:56.869024 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_1401febc-016f-410e-83fb-dd8e2eaead5e/manage-dockerfile/0.log" Jan 25 00:32:57 crc kubenswrapper[4947]: I0125 00:32:57.826310 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"1401febc-016f-410e-83fb-dd8e2eaead5e","Type":"ContainerStarted","Data":"c65f939a7d3cf1486e929917d5a7eac5a1d5bb6038d4ad4228acb5ede1bb53de"} Jan 25 00:32:57 crc kubenswrapper[4947]: I0125 00:32:57.887412 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=4.887332311 podStartE2EDuration="4.887332311s" podCreationTimestamp="2026-01-25 00:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:32:57.872913926 +0000 UTC m=+1417.105904396" watchObservedRunningTime="2026-01-25 00:32:57.887332311 +0000 UTC m=+1417.120322821" Jan 25 00:33:53 crc kubenswrapper[4947]: I0125 00:33:53.285245 4947 generic.go:334] "Generic (PLEG): container finished" podID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerID="c65f939a7d3cf1486e929917d5a7eac5a1d5bb6038d4ad4228acb5ede1bb53de" exitCode=0 Jan 25 00:33:53 crc kubenswrapper[4947]: I0125 00:33:53.285332 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" 
event={"ID":"1401febc-016f-410e-83fb-dd8e2eaead5e","Type":"ContainerDied","Data":"c65f939a7d3cf1486e929917d5a7eac5a1d5bb6038d4ad4228acb5ede1bb53de"} Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.627916 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695690 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-pull\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695737 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-system-configs\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695768 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-root\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695815 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-ca-bundles\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695834 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-proxy-ca-bundles\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695856 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-buildcachedir\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695904 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-push\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695934 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-node-pullsecrets\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695969 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-buildworkdir\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695992 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvdcj\" (UniqueName: \"kubernetes.io/projected/1401febc-016f-410e-83fb-dd8e2eaead5e-kube-api-access-jvdcj\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: 
\"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.696010 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-run\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.696027 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-build-blob-cache\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.696295 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.696802 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.697243 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.697629 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.697837 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.698386 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.699334 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.701270 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.702447 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1401febc-016f-410e-83fb-dd8e2eaead5e-kube-api-access-jvdcj" (OuterVolumeSpecName: "kube-api-access-jvdcj") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "kube-api-access-jvdcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.708286 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.797612 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.797927 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvdcj\" (UniqueName: \"kubernetes.io/projected/1401febc-016f-410e-83fb-dd8e2eaead5e-kube-api-access-jvdcj\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.797948 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.797959 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.797970 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.797984 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.798035 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.798048 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.798060 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.798073 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.799604 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.899846 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:55 crc kubenswrapper[4947]: I0125 00:33:55.304510 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"1401febc-016f-410e-83fb-dd8e2eaead5e","Type":"ContainerDied","Data":"6d75a8acc54f852c0df652a21678fabe1ea78a90eda755e5dd6934eaed7a164f"} Jan 25 00:33:55 crc kubenswrapper[4947]: I0125 00:33:55.304553 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d75a8acc54f852c0df652a21678fabe1ea78a90eda755e5dd6934eaed7a164f" Jan 25 00:33:55 crc kubenswrapper[4947]: I0125 00:33:55.304580 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:33:55 crc kubenswrapper[4947]: I0125 00:33:55.593689 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:33:55 crc kubenswrapper[4947]: I0125 00:33:55.609618 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.281357 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-859d6d5949-k28x7"] Jan 25 00:34:00 crc kubenswrapper[4947]: E0125 00:34:00.282063 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerName="git-clone" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.282084 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerName="git-clone" Jan 25 00:34:00 crc kubenswrapper[4947]: E0125 00:34:00.282105 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerName="manage-dockerfile" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.282122 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerName="manage-dockerfile" Jan 25 00:34:00 crc kubenswrapper[4947]: E0125 00:34:00.282192 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerName="docker-build" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.282205 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerName="docker-build" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.282413 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerName="docker-build" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.283079 4947 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.285648 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-tsdlr" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.303000 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-859d6d5949-k28x7"] Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.377862 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbsz8\" (UniqueName: \"kubernetes.io/projected/043c18a1-e602-4917-b73d-5331da5ee62f-kube-api-access-sbsz8\") pod \"smart-gateway-operator-859d6d5949-k28x7\" (UID: \"043c18a1-e602-4917-b73d-5331da5ee62f\") " pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.377978 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/043c18a1-e602-4917-b73d-5331da5ee62f-runner\") pod \"smart-gateway-operator-859d6d5949-k28x7\" (UID: \"043c18a1-e602-4917-b73d-5331da5ee62f\") " pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.479489 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/043c18a1-e602-4917-b73d-5331da5ee62f-runner\") pod \"smart-gateway-operator-859d6d5949-k28x7\" (UID: \"043c18a1-e602-4917-b73d-5331da5ee62f\") " pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.479661 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbsz8\" (UniqueName: 
\"kubernetes.io/projected/043c18a1-e602-4917-b73d-5331da5ee62f-kube-api-access-sbsz8\") pod \"smart-gateway-operator-859d6d5949-k28x7\" (UID: \"043c18a1-e602-4917-b73d-5331da5ee62f\") " pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.480544 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/043c18a1-e602-4917-b73d-5331da5ee62f-runner\") pod \"smart-gateway-operator-859d6d5949-k28x7\" (UID: \"043c18a1-e602-4917-b73d-5331da5ee62f\") " pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.502426 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbsz8\" (UniqueName: \"kubernetes.io/projected/043c18a1-e602-4917-b73d-5331da5ee62f-kube-api-access-sbsz8\") pod \"smart-gateway-operator-859d6d5949-k28x7\" (UID: \"043c18a1-e602-4917-b73d-5331da5ee62f\") " pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.611617 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" Jan 25 00:34:01 crc kubenswrapper[4947]: I0125 00:34:01.026017 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-859d6d5949-k28x7"] Jan 25 00:34:01 crc kubenswrapper[4947]: W0125 00:34:01.035316 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod043c18a1_e602_4917_b73d_5331da5ee62f.slice/crio-f5d9097adf7dcd2c64f407e053119c0e30869753ea7dd53a52ad783ee6d7d90d WatchSource:0}: Error finding container f5d9097adf7dcd2c64f407e053119c0e30869753ea7dd53a52ad783ee6d7d90d: Status 404 returned error can't find the container with id f5d9097adf7dcd2c64f407e053119c0e30869753ea7dd53a52ad783ee6d7d90d Jan 25 00:34:01 crc kubenswrapper[4947]: I0125 00:34:01.357541 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" event={"ID":"043c18a1-e602-4917-b73d-5331da5ee62f","Type":"ContainerStarted","Data":"f5d9097adf7dcd2c64f407e053119c0e30869753ea7dd53a52ad783ee6d7d90d"} Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.347958 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4"] Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.351515 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.354009 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-lz5n5" Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.364226 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4"] Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.374464 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq9cv\" (UniqueName: \"kubernetes.io/projected/ccea2ce4-d212-4599-a152-5a2d53366128-kube-api-access-tq9cv\") pod \"service-telemetry-operator-7b5f5cc44d-grff4\" (UID: \"ccea2ce4-d212-4599-a152-5a2d53366128\") " pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.375548 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/ccea2ce4-d212-4599-a152-5a2d53366128-runner\") pod \"service-telemetry-operator-7b5f5cc44d-grff4\" (UID: \"ccea2ce4-d212-4599-a152-5a2d53366128\") " pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.477079 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/ccea2ce4-d212-4599-a152-5a2d53366128-runner\") pod \"service-telemetry-operator-7b5f5cc44d-grff4\" (UID: \"ccea2ce4-d212-4599-a152-5a2d53366128\") " pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.477213 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq9cv\" (UniqueName: 
\"kubernetes.io/projected/ccea2ce4-d212-4599-a152-5a2d53366128-kube-api-access-tq9cv\") pod \"service-telemetry-operator-7b5f5cc44d-grff4\" (UID: \"ccea2ce4-d212-4599-a152-5a2d53366128\") " pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.477848 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/ccea2ce4-d212-4599-a152-5a2d53366128-runner\") pod \"service-telemetry-operator-7b5f5cc44d-grff4\" (UID: \"ccea2ce4-d212-4599-a152-5a2d53366128\") " pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.499788 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq9cv\" (UniqueName: \"kubernetes.io/projected/ccea2ce4-d212-4599-a152-5a2d53366128-kube-api-access-tq9cv\") pod \"service-telemetry-operator-7b5f5cc44d-grff4\" (UID: \"ccea2ce4-d212-4599-a152-5a2d53366128\") " pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.671437 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" Jan 25 00:34:12 crc kubenswrapper[4947]: I0125 00:34:12.918576 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4"] Jan 25 00:34:13 crc kubenswrapper[4947]: I0125 00:34:13.479396 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" event={"ID":"ccea2ce4-d212-4599-a152-5a2d53366128","Type":"ContainerStarted","Data":"125a40cb0040bffb623a7e38be48fd96835452f5a833fa6e3d95907548d87645"} Jan 25 00:34:16 crc kubenswrapper[4947]: E0125 00:34:16.416483 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/smart-gateway-operator:stable-1.5" Jan 25 00:34:16 crc kubenswrapper[4947]: E0125 00:34:16.416855 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/infrawatch/smart-gateway-operator:stable-1.5,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:smart-gateway-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:ANSIBLE_VERBOSITY_SMARTGATEWAY_SMARTGATEWAY_INFRA_WATCH,Value:4,ValueFrom:nil,},EnvVar{Name:ANSIBLE_DEBUG_LOGS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CORE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-core:latest,ValueFrom:nil,},EnvVar{
Name:RELATED_IMAGE_BRIDGE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-bridge:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:smart-gateway-operator.v5.0.1769301236,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbsz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod smart-gateway-operator-859d6d5949-k28x7_service-telemetry(043c18a1-e602-4917-b73d-5331da5ee62f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 00:34:16 crc kubenswrapper[4947]: E0125 00:34:16.418057 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" 
podUID="043c18a1-e602-4917-b73d-5331da5ee62f" Jan 25 00:34:16 crc kubenswrapper[4947]: E0125 00:34:16.501323 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/smart-gateway-operator:stable-1.5\\\"\"" pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" podUID="043c18a1-e602-4917-b73d-5331da5ee62f" Jan 25 00:34:17 crc kubenswrapper[4947]: I0125 00:34:17.073145 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:34:17 crc kubenswrapper[4947]: I0125 00:34:17.073210 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:34:21 crc kubenswrapper[4947]: I0125 00:34:21.550969 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" event={"ID":"ccea2ce4-d212-4599-a152-5a2d53366128","Type":"ContainerStarted","Data":"8aeab85df7d9b6543e8fe54d28dc82c81dcbc638388c934892da45eb5e34af11"} Jan 25 00:34:21 crc kubenswrapper[4947]: I0125 00:34:21.628414 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" podStartSLOduration=8.095226514 podStartE2EDuration="15.628391232s" podCreationTimestamp="2026-01-25 00:34:06 +0000 UTC" firstStartedPulling="2026-01-25 00:34:13.355257393 +0000 UTC m=+1492.588247843" lastFinishedPulling="2026-01-25 00:34:20.888422121 +0000 UTC 
m=+1500.121412561" observedRunningTime="2026-01-25 00:34:21.624060448 +0000 UTC m=+1500.857050898" watchObservedRunningTime="2026-01-25 00:34:21.628391232 +0000 UTC m=+1500.861381672" Jan 25 00:34:31 crc kubenswrapper[4947]: I0125 00:34:31.628944 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" event={"ID":"043c18a1-e602-4917-b73d-5331da5ee62f","Type":"ContainerStarted","Data":"781fae171b2127274dfdc7210acab1db35fe3f1b8ff5f0595696d270087274c4"} Jan 25 00:34:31 crc kubenswrapper[4947]: I0125 00:34:31.670240 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" podStartSLOduration=2.088789645 podStartE2EDuration="31.670221045s" podCreationTimestamp="2026-01-25 00:34:00 +0000 UTC" firstStartedPulling="2026-01-25 00:34:01.038971618 +0000 UTC m=+1480.271962058" lastFinishedPulling="2026-01-25 00:34:30.620403018 +0000 UTC m=+1509.853393458" observedRunningTime="2026-01-25 00:34:31.663834051 +0000 UTC m=+1510.896824501" watchObservedRunningTime="2026-01-25 00:34:31.670221045 +0000 UTC m=+1510.903211485" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.197874 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pvgb8"] Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.199841 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.202112 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.202397 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.202526 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-ktwfn" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.202647 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.204385 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.204859 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.207344 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.215619 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pvgb8"] Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.359320 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: 
\"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.359385 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.359404 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-users\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.359424 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.359509 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-config\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.359582 
4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vf7l\" (UniqueName: \"kubernetes.io/projected/7b062645-435c-4461-93cd-8cbe7cd8e733-kube-api-access-5vf7l\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.359608 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.461045 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-config\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.461182 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vf7l\" (UniqueName: \"kubernetes.io/projected/7b062645-435c-4461-93cd-8cbe7cd8e733-kube-api-access-5vf7l\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.461242 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-ca\") pod 
\"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.461285 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.461346 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.461382 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-users\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.461418 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: 
I0125 00:34:45.462551 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-config\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.468587 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.469235 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.481540 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-users\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.484543 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: 
\"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.486025 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.506643 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vf7l\" (UniqueName: \"kubernetes.io/projected/7b062645-435c-4461-93cd-8cbe7cd8e733-kube-api-access-5vf7l\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.550650 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.972464 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pvgb8"] Jan 25 00:34:46 crc kubenswrapper[4947]: I0125 00:34:46.735575 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" event={"ID":"7b062645-435c-4461-93cd-8cbe7cd8e733","Type":"ContainerStarted","Data":"1e919ed0615f4f4827a60cadd255f60dc5f4870e6c12a5fe6e5916224ada10bc"} Jan 25 00:34:47 crc kubenswrapper[4947]: I0125 00:34:47.072640 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:34:47 crc kubenswrapper[4947]: I0125 00:34:47.072701 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:34:54 crc kubenswrapper[4947]: I0125 00:34:54.796045 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" event={"ID":"7b062645-435c-4461-93cd-8cbe7cd8e733","Type":"ContainerStarted","Data":"99188ef6337d790fc8a682f69dd76728f1c088acbf4f5174dab2cdba95af3406"} Jan 25 00:34:54 crc kubenswrapper[4947]: I0125 00:34:54.821035 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" podStartSLOduration=1.222919649 podStartE2EDuration="9.821006657s" podCreationTimestamp="2026-01-25 00:34:45 +0000 UTC" 
firstStartedPulling="2026-01-25 00:34:45.992858811 +0000 UTC m=+1525.225849261" lastFinishedPulling="2026-01-25 00:34:54.590945829 +0000 UTC m=+1533.823936269" observedRunningTime="2026-01-25 00:34:54.815202878 +0000 UTC m=+1534.048193318" watchObservedRunningTime="2026-01-25 00:34:54.821006657 +0000 UTC m=+1534.053997097" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.472491 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.475644 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.478458 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-4vrfm" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.479454 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.479733 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-2" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.479831 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-1" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.479975 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.480164 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.481363 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 
00:34:57.482823 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.482887 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.488350 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.491483 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667409 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667536 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d82b42a-6236-43af-8190-d28e96b2b933-config-out\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667636 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667754 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d87f148a-2eeb-416b-ae9b-e7d61d2c7097\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d87f148a-2eeb-416b-ae9b-e7d61d2c7097\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667798 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w22zj\" (UniqueName: \"kubernetes.io/projected/3d82b42a-6236-43af-8190-d28e96b2b933-kube-api-access-w22zj\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667835 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667875 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667932 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-config\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " 
pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667974 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d82b42a-6236-43af-8190-d28e96b2b933-tls-assets\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.668005 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.668046 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-web-config\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.668091 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769104 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-session-secret\") pod \"prometheus-default-0\" 
(UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769165 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769202 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d82b42a-6236-43af-8190-d28e96b2b933-config-out\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769233 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769257 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d87f148a-2eeb-416b-ae9b-e7d61d2c7097\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d87f148a-2eeb-416b-ae9b-e7d61d2c7097\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769278 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w22zj\" (UniqueName: \"kubernetes.io/projected/3d82b42a-6236-43af-8190-d28e96b2b933-kube-api-access-w22zj\") pod 
\"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769293 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769312 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769338 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-config\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769357 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d82b42a-6236-43af-8190-d28e96b2b933-tls-assets\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769373 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-2\") pod 
\"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769392 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-web-config\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.770628 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: E0125 00:34:57.770841 4947 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 25 00:34:57 crc kubenswrapper[4947]: E0125 00:34:57.770931 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls podName:3d82b42a-6236-43af-8190-d28e96b2b933 nodeName:}" failed. No retries permitted until 2026-01-25 00:34:58.270909162 +0000 UTC m=+1537.503899692 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "3d82b42a-6236-43af-8190-d28e96b2b933") : secret "default-prometheus-proxy-tls" not found Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.771490 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.771586 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.773346 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.774901 4947 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.774955 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d87f148a-2eeb-416b-ae9b-e7d61d2c7097\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d87f148a-2eeb-416b-ae9b-e7d61d2c7097\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f0750656d621eabcb0a2ba208da554ab8fdfeaf937caaa46473a227c8cb68ab8/globalmount\"" pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.776515 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d82b42a-6236-43af-8190-d28e96b2b933-tls-assets\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.776939 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.780039 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d82b42a-6236-43af-8190-d28e96b2b933-config-out\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.780176 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-web-config\") pod \"prometheus-default-0\" (UID: 
\"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.780810 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-config\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.796717 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w22zj\" (UniqueName: \"kubernetes.io/projected/3d82b42a-6236-43af-8190-d28e96b2b933-kube-api-access-w22zj\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.829949 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d87f148a-2eeb-416b-ae9b-e7d61d2c7097\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d87f148a-2eeb-416b-ae9b-e7d61d2c7097\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:58 crc kubenswrapper[4947]: I0125 00:34:58.275531 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:58 crc kubenswrapper[4947]: E0125 00:34:58.275707 4947 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 25 00:34:58 crc kubenswrapper[4947]: E0125 00:34:58.275802 4947 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls podName:3d82b42a-6236-43af-8190-d28e96b2b933 nodeName:}" failed. No retries permitted until 2026-01-25 00:34:59.275777785 +0000 UTC m=+1538.508768235 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "3d82b42a-6236-43af-8190-d28e96b2b933") : secret "default-prometheus-proxy-tls" not found Jan 25 00:34:59 crc kubenswrapper[4947]: I0125 00:34:59.289784 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:59 crc kubenswrapper[4947]: I0125 00:34:59.295774 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:59 crc kubenswrapper[4947]: I0125 00:34:59.296159 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 25 00:34:59 crc kubenswrapper[4947]: I0125 00:34:59.763738 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 25 00:34:59 crc kubenswrapper[4947]: W0125 00:34:59.785780 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d82b42a_6236_43af_8190_d28e96b2b933.slice/crio-886536085f67f9860c669e6a0659f8212d2d57325cc2b2791e75511cdff8bfba WatchSource:0}: Error finding container 886536085f67f9860c669e6a0659f8212d2d57325cc2b2791e75511cdff8bfba: Status 404 returned error can't find the container with id 886536085f67f9860c669e6a0659f8212d2d57325cc2b2791e75511cdff8bfba Jan 25 00:34:59 crc kubenswrapper[4947]: I0125 00:34:59.841336 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d82b42a-6236-43af-8190-d28e96b2b933","Type":"ContainerStarted","Data":"886536085f67f9860c669e6a0659f8212d2d57325cc2b2791e75511cdff8bfba"} Jan 25 00:35:03 crc kubenswrapper[4947]: I0125 00:35:03.879172 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d82b42a-6236-43af-8190-d28e96b2b933","Type":"ContainerStarted","Data":"a045c02ba900e38ed58a3c74b6d3dc84912734d51ba0245f945e11a4825116da"} Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.133697 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-4gkbj"] Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.134877 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-4gkbj" Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.146637 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-4gkbj"] Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.302148 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt5vj\" (UniqueName: \"kubernetes.io/projected/b6cfc9d0-598c-4149-8a38-ec02ced8d2b8-kube-api-access-nt5vj\") pod \"default-snmp-webhook-6856cfb745-4gkbj\" (UID: \"b6cfc9d0-598c-4149-8a38-ec02ced8d2b8\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-4gkbj" Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.403838 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt5vj\" (UniqueName: \"kubernetes.io/projected/b6cfc9d0-598c-4149-8a38-ec02ced8d2b8-kube-api-access-nt5vj\") pod \"default-snmp-webhook-6856cfb745-4gkbj\" (UID: \"b6cfc9d0-598c-4149-8a38-ec02ced8d2b8\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-4gkbj" Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.436022 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt5vj\" (UniqueName: \"kubernetes.io/projected/b6cfc9d0-598c-4149-8a38-ec02ced8d2b8-kube-api-access-nt5vj\") pod \"default-snmp-webhook-6856cfb745-4gkbj\" (UID: \"b6cfc9d0-598c-4149-8a38-ec02ced8d2b8\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-4gkbj" Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.455871 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-4gkbj" Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.898229 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-4gkbj"] Jan 25 00:35:07 crc kubenswrapper[4947]: W0125 00:35:07.905314 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6cfc9d0_598c_4149_8a38_ec02ced8d2b8.slice/crio-3400200c7144f00cbdb00e101b7366d32b06932b58fd8100cb7d507259dd1c9d WatchSource:0}: Error finding container 3400200c7144f00cbdb00e101b7366d32b06932b58fd8100cb7d507259dd1c9d: Status 404 returned error can't find the container with id 3400200c7144f00cbdb00e101b7366d32b06932b58fd8100cb7d507259dd1c9d Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.924549 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-4gkbj" event={"ID":"b6cfc9d0-598c-4149-8a38-ec02ced8d2b8","Type":"ContainerStarted","Data":"3400200c7144f00cbdb00e101b7366d32b06932b58fd8100cb7d507259dd1c9d"} Jan 25 00:35:10 crc kubenswrapper[4947]: I0125 00:35:10.946357 4947 generic.go:334] "Generic (PLEG): container finished" podID="3d82b42a-6236-43af-8190-d28e96b2b933" containerID="a045c02ba900e38ed58a3c74b6d3dc84912734d51ba0245f945e11a4825116da" exitCode=0 Jan 25 00:35:10 crc kubenswrapper[4947]: I0125 00:35:10.946454 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d82b42a-6236-43af-8190-d28e96b2b933","Type":"ContainerDied","Data":"a045c02ba900e38ed58a3c74b6d3dc84912734d51ba0245f945e11a4825116da"} Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.159264 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.163031 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.165730 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-dgrg2" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.167863 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.168089 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.168574 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.172088 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.172118 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.177666 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.260918 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4888dfdb-3780-4d4b-ad3a-4c1238a72464-tls-assets\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.260962 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-web-config\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.260991 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-config-volume\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.261207 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.261487 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.261643 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brfsp\" (UniqueName: \"kubernetes.io/projected/4888dfdb-3780-4d4b-ad3a-4c1238a72464-kube-api-access-brfsp\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.261835 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4888dfdb-3780-4d4b-ad3a-4c1238a72464-config-out\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.261883 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-36992d4a-ba50-47dc-838a-1aa18f95be08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36992d4a-ba50-47dc-838a-1aa18f95be08\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.261992 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.363478 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.363543 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" 
Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.363581 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brfsp\" (UniqueName: \"kubernetes.io/projected/4888dfdb-3780-4d4b-ad3a-4c1238a72464-kube-api-access-brfsp\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.363620 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4888dfdb-3780-4d4b-ad3a-4c1238a72464-config-out\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.363657 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-36992d4a-ba50-47dc-838a-1aa18f95be08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36992d4a-ba50-47dc-838a-1aa18f95be08\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.363690 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.363724 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4888dfdb-3780-4d4b-ad3a-4c1238a72464-tls-assets\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc 
kubenswrapper[4947]: I0125 00:35:11.363747 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-web-config\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: E0125 00:35:11.363761 4947 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.363774 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-config-volume\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: E0125 00:35:11.363925 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls podName:4888dfdb-3780-4d4b-ad3a-4c1238a72464 nodeName:}" failed. No retries permitted until 2026-01-25 00:35:11.863810764 +0000 UTC m=+1551.096801204 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "4888dfdb-3780-4d4b-ad3a-4c1238a72464") : secret "default-alertmanager-proxy-tls" not found Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.368454 4947 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.368585 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-36992d4a-ba50-47dc-838a-1aa18f95be08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36992d4a-ba50-47dc-838a-1aa18f95be08\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2606f9dadba909059c2a1a29eaa041493cf1f0540adfd02153a60e509d52714b/globalmount\"" pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.369808 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-web-config\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.370153 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4888dfdb-3780-4d4b-ad3a-4c1238a72464-config-out\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.371337 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.375708 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-config-volume\") pod 
\"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.380113 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4888dfdb-3780-4d4b-ad3a-4c1238a72464-tls-assets\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.391101 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.394599 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brfsp\" (UniqueName: \"kubernetes.io/projected/4888dfdb-3780-4d4b-ad3a-4c1238a72464-kube-api-access-brfsp\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.420078 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-36992d4a-ba50-47dc-838a-1aa18f95be08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36992d4a-ba50-47dc-838a-1aa18f95be08\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.871098 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls\") pod 
\"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: E0125 00:35:11.871332 4947 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 25 00:35:11 crc kubenswrapper[4947]: E0125 00:35:11.871389 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls podName:4888dfdb-3780-4d4b-ad3a-4c1238a72464 nodeName:}" failed. No retries permitted until 2026-01-25 00:35:12.871371871 +0000 UTC m=+1552.104362321 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "4888dfdb-3780-4d4b-ad3a-4c1238a72464") : secret "default-alertmanager-proxy-tls" not found Jan 25 00:35:12 crc kubenswrapper[4947]: I0125 00:35:12.885565 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:12 crc kubenswrapper[4947]: E0125 00:35:12.885702 4947 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 25 00:35:12 crc kubenswrapper[4947]: E0125 00:35:12.885748 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls podName:4888dfdb-3780-4d4b-ad3a-4c1238a72464 nodeName:}" failed. 
No retries permitted until 2026-01-25 00:35:14.885735065 +0000 UTC m=+1554.118725505 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "4888dfdb-3780-4d4b-ad3a-4c1238a72464") : secret "default-alertmanager-proxy-tls" not found Jan 25 00:35:14 crc kubenswrapper[4947]: I0125 00:35:14.916022 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:14 crc kubenswrapper[4947]: I0125 00:35:14.923462 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:15 crc kubenswrapper[4947]: I0125 00:35:15.090227 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:15 crc kubenswrapper[4947]: I0125 00:35:15.602518 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 25 00:35:15 crc kubenswrapper[4947]: W0125 00:35:15.621313 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4888dfdb_3780_4d4b_ad3a_4c1238a72464.slice/crio-9061de30bd526483a4d244468a93cf327930b9b0716d2865e61c53a22127e111 WatchSource:0}: Error finding container 9061de30bd526483a4d244468a93cf327930b9b0716d2865e61c53a22127e111: Status 404 returned error can't find the container with id 9061de30bd526483a4d244468a93cf327930b9b0716d2865e61c53a22127e111 Jan 25 00:35:15 crc kubenswrapper[4947]: I0125 00:35:15.986080 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"4888dfdb-3780-4d4b-ad3a-4c1238a72464","Type":"ContainerStarted","Data":"9061de30bd526483a4d244468a93cf327930b9b0716d2865e61c53a22127e111"} Jan 25 00:35:15 crc kubenswrapper[4947]: I0125 00:35:15.987853 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-4gkbj" event={"ID":"b6cfc9d0-598c-4149-8a38-ec02ced8d2b8","Type":"ContainerStarted","Data":"698db23d4f00bca2e1edf9c88b01b98fa842e3bad20e6bec687ce223f9530e4a"} Jan 25 00:35:16 crc kubenswrapper[4947]: I0125 00:35:16.009195 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6856cfb745-4gkbj" podStartSLOduration=1.798924161 podStartE2EDuration="9.00917122s" podCreationTimestamp="2026-01-25 00:35:07 +0000 UTC" firstStartedPulling="2026-01-25 00:35:07.920722578 +0000 UTC m=+1547.153713048" lastFinishedPulling="2026-01-25 00:35:15.130969667 +0000 UTC m=+1554.363960107" observedRunningTime="2026-01-25 00:35:16.004434087 +0000 UTC m=+1555.237424527" 
watchObservedRunningTime="2026-01-25 00:35:16.00917122 +0000 UTC m=+1555.242161660" Jan 25 00:35:17 crc kubenswrapper[4947]: I0125 00:35:17.072565 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:35:17 crc kubenswrapper[4947]: I0125 00:35:17.072903 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:35:17 crc kubenswrapper[4947]: I0125 00:35:17.072941 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:35:17 crc kubenswrapper[4947]: I0125 00:35:17.073455 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 00:35:17 crc kubenswrapper[4947]: I0125 00:35:17.073508 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" containerID="cri-o://71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" gracePeriod=600 Jan 25 00:35:17 crc kubenswrapper[4947]: E0125 00:35:17.524327 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:35:18 crc kubenswrapper[4947]: I0125 00:35:18.002369 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"4888dfdb-3780-4d4b-ad3a-4c1238a72464","Type":"ContainerStarted","Data":"08ee944f8a5383a5c7dd356f384460db3c5d3bfe49cc6c3d9f6ce605b1501c65"} Jan 25 00:35:18 crc kubenswrapper[4947]: I0125 00:35:18.005174 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" exitCode=0 Jan 25 00:35:18 crc kubenswrapper[4947]: I0125 00:35:18.005214 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f"} Jan 25 00:35:18 crc kubenswrapper[4947]: I0125 00:35:18.005243 4947 scope.go:117] "RemoveContainer" containerID="e5fe371d16c66e0e03d5287fecc388949acddf08b1c19945fd233890f245ebb7" Jan 25 00:35:18 crc kubenswrapper[4947]: I0125 00:35:18.005622 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:35:18 crc kubenswrapper[4947]: E0125 00:35:18.005870 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:35:20 crc kubenswrapper[4947]: I0125 00:35:20.026702 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d82b42a-6236-43af-8190-d28e96b2b933","Type":"ContainerStarted","Data":"569db56eeeb09d3ac3f8c9914cd4643b3e8fe4d3b4ba4dcc6aeaa197a2808281"} Jan 25 00:35:22 crc kubenswrapper[4947]: I0125 00:35:22.043536 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d82b42a-6236-43af-8190-d28e96b2b933","Type":"ContainerStarted","Data":"9f254d35fa94f4389884a69e4c71a2d2ea013a1091ccffab642ff343ab29c9eb"} Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.817639 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf"] Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.818997 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.820886 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.821060 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.821067 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.821325 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-brmnv" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.834821 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf"] Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.921141 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.921196 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.921216 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.921264 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.921285 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frbp4\" (UniqueName: \"kubernetes.io/projected/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-kube-api-access-frbp4\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.023345 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.023460 
4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frbp4\" (UniqueName: \"kubernetes.io/projected/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-kube-api-access-frbp4\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.023702 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.023784 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.023835 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.024449 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: E0125 00:35:25.024614 4947 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 25 00:35:25 crc kubenswrapper[4947]: E0125 00:35:25.024712 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls podName:0d98fa0e-0a1b-4139-b32a-dbc771dc0939 nodeName:}" failed. No retries permitted until 2026-01-25 00:35:25.524679523 +0000 UTC m=+1564.757670003 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" (UID: "0d98fa0e-0a1b-4139-b32a-dbc771dc0939") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.026390 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.037932 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: 
\"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.045920 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frbp4\" (UniqueName: \"kubernetes.io/projected/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-kube-api-access-frbp4\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.530374 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: E0125 00:35:25.530510 4947 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 25 00:35:25 crc kubenswrapper[4947]: E0125 00:35:25.530556 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls podName:0d98fa0e-0a1b-4139-b32a-dbc771dc0939 nodeName:}" failed. No retries permitted until 2026-01-25 00:35:26.530541654 +0000 UTC m=+1565.763532094 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" (UID: "0d98fa0e-0a1b-4139-b32a-dbc771dc0939") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 25 00:35:26 crc kubenswrapper[4947]: I0125 00:35:26.544353 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:26 crc kubenswrapper[4947]: I0125 00:35:26.554386 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:26 crc kubenswrapper[4947]: I0125 00:35:26.687082 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.006823 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n"] Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.008922 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.014078 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.014221 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.027399 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n"] Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.081940 4947 generic.go:334] "Generic (PLEG): container finished" podID="4888dfdb-3780-4d4b-ad3a-4c1238a72464" containerID="08ee944f8a5383a5c7dd356f384460db3c5d3bfe49cc6c3d9f6ce605b1501c65" exitCode=0 Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.082001 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"4888dfdb-3780-4d4b-ad3a-4c1238a72464","Type":"ContainerDied","Data":"08ee944f8a5383a5c7dd356f384460db3c5d3bfe49cc6c3d9f6ce605b1501c65"} Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.153646 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d0837a20-7313-4da0-9df6-1ce849d1f029-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.153700 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-session-secret\") pod 
\"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.153746 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d0837a20-7313-4da0-9df6-1ce849d1f029-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.153826 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx9x9\" (UniqueName: \"kubernetes.io/projected/d0837a20-7313-4da0-9df6-1ce849d1f029-kube-api-access-vx9x9\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.153868 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.180019 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf"] Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.254869 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.255041 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d0837a20-7313-4da0-9df6-1ce849d1f029-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.255111 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.255251 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d0837a20-7313-4da0-9df6-1ce849d1f029-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.255528 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx9x9\" (UniqueName: \"kubernetes.io/projected/d0837a20-7313-4da0-9df6-1ce849d1f029-kube-api-access-vx9x9\") pod 
\"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: E0125 00:35:27.256418 4947 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 25 00:35:27 crc kubenswrapper[4947]: E0125 00:35:27.256479 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls podName:d0837a20-7313-4da0-9df6-1ce849d1f029 nodeName:}" failed. No retries permitted until 2026-01-25 00:35:27.756463192 +0000 UTC m=+1566.989453632 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" (UID: "d0837a20-7313-4da0-9df6-1ce849d1f029") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.256512 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d0837a20-7313-4da0-9df6-1ce849d1f029-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.257677 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d0837a20-7313-4da0-9df6-1ce849d1f029-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.262168 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.275713 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx9x9\" (UniqueName: \"kubernetes.io/projected/d0837a20-7313-4da0-9df6-1ce849d1f029-kube-api-access-vx9x9\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.761442 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: E0125 00:35:27.761834 4947 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 25 00:35:27 crc kubenswrapper[4947]: E0125 00:35:27.761885 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls podName:d0837a20-7313-4da0-9df6-1ce849d1f029 nodeName:}" failed. 
No retries permitted until 2026-01-25 00:35:28.76187151 +0000 UTC m=+1567.994861950 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" (UID: "d0837a20-7313-4da0-9df6-1ce849d1f029") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 25 00:35:28 crc kubenswrapper[4947]: I0125 00:35:28.090578 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" event={"ID":"0d98fa0e-0a1b-4139-b32a-dbc771dc0939","Type":"ContainerStarted","Data":"4d97dfbde1af11c56f34ab0d2caa8202894d0c76cee6f888619471b880c92853"} Jan 25 00:35:28 crc kubenswrapper[4947]: I0125 00:35:28.777875 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:28 crc kubenswrapper[4947]: I0125 00:35:28.782819 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:28 crc kubenswrapper[4947]: I0125 00:35:28.837733 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:30 crc kubenswrapper[4947]: I0125 00:35:30.089906 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:35:30 crc kubenswrapper[4947]: E0125 00:35:30.090652 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.646554 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm"] Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.648415 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.654785 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.655171 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.656713 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm"] Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.834387 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m97zx\" (UniqueName: \"kubernetes.io/projected/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-kube-api-access-m97zx\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.834468 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.834492 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls\") pod 
\"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.834513 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.834565 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.935969 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.936019 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.936041 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.936064 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.936116 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m97zx\" (UniqueName: \"kubernetes.io/projected/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-kube-api-access-m97zx\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: E0125 00:35:31.936542 4947 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 25 00:35:31 crc kubenswrapper[4947]: E0125 00:35:31.936591 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls podName:5ef6c1dd-abb0-4c2f-8aa5-13614c09e445 nodeName:}" failed. 
No retries permitted until 2026-01-25 00:35:32.436575636 +0000 UTC m=+1571.669566076 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" (UID: "5ef6c1dd-abb0-4c2f-8aa5-13614c09e445") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.937565 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.938590 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.953272 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.956286 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m97zx\" (UniqueName: 
\"kubernetes.io/projected/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-kube-api-access-m97zx\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm"
Jan 25 00:35:32 crc kubenswrapper[4947]: I0125 00:35:32.441824 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm"
Jan 25 00:35:32 crc kubenswrapper[4947]: E0125 00:35:32.441971 4947 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Jan 25 00:35:32 crc kubenswrapper[4947]: E0125 00:35:32.442066 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls podName:5ef6c1dd-abb0-4c2f-8aa5-13614c09e445 nodeName:}" failed. No retries permitted until 2026-01-25 00:35:33.442042105 +0000 UTC m=+1572.675032565 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" (UID: "5ef6c1dd-abb0-4c2f-8aa5-13614c09e445") : secret "default-cloud1-sens-meter-proxy-tls" not found
Jan 25 00:35:32 crc kubenswrapper[4947]: I0125 00:35:32.935517 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n"]
Jan 25 00:35:33 crc kubenswrapper[4947]: I0125 00:35:33.457228 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm"
Jan 25 00:35:33 crc kubenswrapper[4947]: I0125 00:35:33.495195 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm"
Jan 25 00:35:33 crc kubenswrapper[4947]: I0125 00:35:33.781475 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm"
Jan 25 00:35:34 crc kubenswrapper[4947]: I0125 00:35:34.143690 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d82b42a-6236-43af-8190-d28e96b2b933","Type":"ContainerStarted","Data":"89afa61e3914ce8cb8693340cc5cc3c32db2918704ce204fe9d11a8dd7035a58"}
Jan 25 00:35:34 crc kubenswrapper[4947]: I0125 00:35:34.145933 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" event={"ID":"0d98fa0e-0a1b-4139-b32a-dbc771dc0939","Type":"ContainerStarted","Data":"c8fb5908b4d1240858bee40840ae994e569cc4d5d414f2d1c285dab39bb3029c"}
Jan 25 00:35:34 crc kubenswrapper[4947]: I0125 00:35:34.148201 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"4888dfdb-3780-4d4b-ad3a-4c1238a72464","Type":"ContainerStarted","Data":"dd02393a1ef9e451c3aa4e14f76831f95aa211851c0221f15c2b4d3e3616aa30"}
Jan 25 00:35:34 crc kubenswrapper[4947]: I0125 00:35:34.149409 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" event={"ID":"d0837a20-7313-4da0-9df6-1ce849d1f029","Type":"ContainerStarted","Data":"cfa6a1ad3e6bc390bd231b034916d7e739a7eb9412b6be67c10036a98ea03803"}
Jan 25 00:35:34 crc kubenswrapper[4947]: I0125 00:35:34.167902 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.443256582 podStartE2EDuration="38.167885162s" podCreationTimestamp="2026-01-25 00:34:56 +0000 UTC" firstStartedPulling="2026-01-25 00:34:59.788901445 +0000 UTC m=+1539.021891905" lastFinishedPulling="2026-01-25 00:35:33.513530045 +0000 UTC m=+1572.746520485" observedRunningTime="2026-01-25 00:35:34.164526845 +0000 UTC m=+1573.397517285" watchObservedRunningTime="2026-01-25 00:35:34.167885162 +0000 UTC m=+1573.400875602"
Jan 25 00:35:34 crc kubenswrapper[4947]: I0125 00:35:34.191987 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm"]
Jan 25 00:35:34 crc kubenswrapper[4947]: I0125 00:35:34.296797 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0"
Jan 25 00:35:35 crc kubenswrapper[4947]: I0125 00:35:35.158732 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" event={"ID":"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445","Type":"ContainerStarted","Data":"63f51f9bec4252effca2b75fa251bd57f899aa6aceb9e547c853c07a53ddd559"}
Jan 25 00:35:35 crc kubenswrapper[4947]: I0125 00:35:35.160235 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" event={"ID":"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445","Type":"ContainerStarted","Data":"9e73e5a0278310d93f36aa55d1882513dae576a683ba398eb59f9c836ffed722"}
Jan 25 00:35:35 crc kubenswrapper[4947]: I0125 00:35:35.161997 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" event={"ID":"0d98fa0e-0a1b-4139-b32a-dbc771dc0939","Type":"ContainerStarted","Data":"cfcf79fed5bba223ce07589542b769fe8fff87686f83fd8454f58b1e7ee7574b"}
Jan 25 00:35:35 crc kubenswrapper[4947]: I0125 00:35:35.164867 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" event={"ID":"d0837a20-7313-4da0-9df6-1ce849d1f029","Type":"ContainerStarted","Data":"31bf172e06d3d6ed5dd0c2957898aced9ccadd1061a4b014252d4a8765250880"}
Jan 25 00:35:35 crc kubenswrapper[4947]: I0125 00:35:35.164901 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" event={"ID":"d0837a20-7313-4da0-9df6-1ce849d1f029","Type":"ContainerStarted","Data":"51738f9e40b34f8bc1cf8fa8e4a44db2419a20877eb84aefbcbdad9edda367f6"}
Jan 25 00:35:36 crc kubenswrapper[4947]: I0125 00:35:36.178515 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" event={"ID":"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445","Type":"ContainerStarted","Data":"952628d963a6ca1dc30326049a85e581da8b18963ce22aa7b39a33466cdae78a"}
Jan 25 00:35:36 crc kubenswrapper[4947]: I0125 00:35:36.183690 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"4888dfdb-3780-4d4b-ad3a-4c1238a72464","Type":"ContainerStarted","Data":"778a0a35697b5c4b629853bec5fd9b728a61b9623a0bdf8cb70b8a05bd1f8bad"}
Jan 25 00:35:36 crc kubenswrapper[4947]: I0125 00:35:36.184066 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"4888dfdb-3780-4d4b-ad3a-4c1238a72464","Type":"ContainerStarted","Data":"66b8ebe4fed73736994a2bf82364afc019d0f54c3066ac67438ca706e938bcb6"}
Jan 25 00:35:36 crc kubenswrapper[4947]: I0125 00:35:36.223972 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=17.430590389 podStartE2EDuration="26.223946743s" podCreationTimestamp="2026-01-25 00:35:10 +0000 UTC" firstStartedPulling="2026-01-25 00:35:27.084400599 +0000 UTC m=+1566.317391049" lastFinishedPulling="2026-01-25 00:35:35.877756963 +0000 UTC m=+1575.110747403" observedRunningTime="2026-01-25 00:35:36.206341353 +0000 UTC m=+1575.439331813" watchObservedRunningTime="2026-01-25 00:35:36.223946743 +0000 UTC m=+1575.456937183"
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.515747 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"]
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.518012 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.519957 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert"
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.521084 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap"
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.528185 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"]
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.667220 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/9b3c0215-a9e0-45e1-a844-c93fd70138c9-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.667279 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9b3c0215-a9e0-45e1-a844-c93fd70138c9-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.667322 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84lnh\" (UniqueName: \"kubernetes.io/projected/9b3c0215-a9e0-45e1-a844-c93fd70138c9-kube-api-access-84lnh\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.667384 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9b3c0215-a9e0-45e1-a844-c93fd70138c9-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.769384 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84lnh\" (UniqueName: \"kubernetes.io/projected/9b3c0215-a9e0-45e1-a844-c93fd70138c9-kube-api-access-84lnh\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.770990 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9b3c0215-a9e0-45e1-a844-c93fd70138c9-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.771096 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/9b3c0215-a9e0-45e1-a844-c93fd70138c9-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.771154 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9b3c0215-a9e0-45e1-a844-c93fd70138c9-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.772306 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9b3c0215-a9e0-45e1-a844-c93fd70138c9-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.774028 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9b3c0215-a9e0-45e1-a844-c93fd70138c9-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.779154 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/9b3c0215-a9e0-45e1-a844-c93fd70138c9-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.791995 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84lnh\" (UniqueName: \"kubernetes.io/projected/9b3c0215-a9e0-45e1-a844-c93fd70138c9-kube-api-access-84lnh\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"
Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.846006 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"
Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.693061 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"]
Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.694425 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"
Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.696669 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap"
Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.713712 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"]
Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.791731 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b022a945-2af3-4275-bc4b-5db0790be691-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"
Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.792096 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b022a945-2af3-4275-bc4b-5db0790be691-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"
Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.792156 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckbm7\" (UniqueName: \"kubernetes.io/projected/b022a945-2af3-4275-bc4b-5db0790be691-kube-api-access-ckbm7\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"
Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.792187 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/b022a945-2af3-4275-bc4b-5db0790be691-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"
Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.893830 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckbm7\" (UniqueName: \"kubernetes.io/projected/b022a945-2af3-4275-bc4b-5db0790be691-kube-api-access-ckbm7\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"
Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.893891 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/b022a945-2af3-4275-bc4b-5db0790be691-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"
Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.893930 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b022a945-2af3-4275-bc4b-5db0790be691-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"
Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.893979 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b022a945-2af3-4275-bc4b-5db0790be691-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"
Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.895822 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b022a945-2af3-4275-bc4b-5db0790be691-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"
Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.899920 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b022a945-2af3-4275-bc4b-5db0790be691-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"
Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.901410 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/b022a945-2af3-4275-bc4b-5db0790be691-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"
Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.906959 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"]
Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.910739 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckbm7\" (UniqueName: \"kubernetes.io/projected/b022a945-2af3-4275-bc4b-5db0790be691-kube-api-access-ckbm7\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"
Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.008634 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"
Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.253293 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" event={"ID":"9b3c0215-a9e0-45e1-a844-c93fd70138c9","Type":"ContainerStarted","Data":"d9ce820614e5d7a763129eac969f4e1c9c86f14f7a6adc4b28a0257f580b10cd"}
Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.253342 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" event={"ID":"9b3c0215-a9e0-45e1-a844-c93fd70138c9","Type":"ContainerStarted","Data":"760ac999d60a248748a1ee5dbc898b19e76b9e0dd5eb790f71442581c0b7c206"}
Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.261436 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" event={"ID":"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445","Type":"ContainerStarted","Data":"c37b71c00d0cd5a7f621e61de65d3ac4b0d98cefac0f8d02ed6117499871f294"}
Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.268000 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" event={"ID":"0d98fa0e-0a1b-4139-b32a-dbc771dc0939","Type":"ContainerStarted","Data":"6c3ec49fb78816b957de774b43eca859b84afab9bc1c17ce5ccbcdbdc9629800"}
Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.281983 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" event={"ID":"d0837a20-7313-4da0-9df6-1ce849d1f029","Type":"ContainerStarted","Data":"d5c02e9b61650bb2174f7eed8f48b8bea2b5004263ebc20f70626bd94c40d2b1"}
Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.284370 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" podStartSLOduration=3.769903102 podStartE2EDuration="10.284350156s" podCreationTimestamp="2026-01-25 00:35:31 +0000 UTC" firstStartedPulling="2026-01-25 00:35:34.203883422 +0000 UTC m=+1573.436873862" lastFinishedPulling="2026-01-25 00:35:40.718330476 +0000 UTC m=+1579.951320916" observedRunningTime="2026-01-25 00:35:41.283920215 +0000 UTC m=+1580.516910655" watchObservedRunningTime="2026-01-25 00:35:41.284350156 +0000 UTC m=+1580.517340596"
Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.360849 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" podStartSLOduration=3.773651158 podStartE2EDuration="17.360828693s" podCreationTimestamp="2026-01-25 00:35:24 +0000 UTC" firstStartedPulling="2026-01-25 00:35:27.187345628 +0000 UTC m=+1566.420336068" lastFinishedPulling="2026-01-25 00:35:40.774523163 +0000 UTC m=+1580.007513603" observedRunningTime="2026-01-25 00:35:41.359388536 +0000 UTC m=+1580.592378986" watchObservedRunningTime="2026-01-25 00:35:41.360828693 +0000 UTC m=+1580.593819133"
Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.364273 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" podStartSLOduration=7.841365465 podStartE2EDuration="15.364254012s" podCreationTimestamp="2026-01-25 00:35:26 +0000 UTC" firstStartedPulling="2026-01-25 00:35:33.323095692 +0000 UTC m=+1572.556086132" lastFinishedPulling="2026-01-25 00:35:40.845984239 +0000 UTC m=+1580.078974679" observedRunningTime="2026-01-25 00:35:41.324468854 +0000 UTC m=+1580.557459294" watchObservedRunningTime="2026-01-25 00:35:41.364254012 +0000 UTC m=+1580.597244452"
Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.550818 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"]
Jan 25 00:35:41 crc kubenswrapper[4947]: W0125 00:35:41.553785 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb022a945_2af3_4275_bc4b_5db0790be691.slice/crio-f5ca8876c4d893e65d935a3e684f23d91ef308c1f393325fc7652bb85d3a7133 WatchSource:0}: Error finding container f5ca8876c4d893e65d935a3e684f23d91ef308c1f393325fc7652bb85d3a7133: Status 404 returned error can't find the container with id f5ca8876c4d893e65d935a3e684f23d91ef308c1f393325fc7652bb85d3a7133
Jan 25 00:35:42 crc kubenswrapper[4947]: I0125 00:35:42.290541 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" event={"ID":"b022a945-2af3-4275-bc4b-5db0790be691","Type":"ContainerStarted","Data":"07e470f4131ce01a4bea2c3222308a0262feb230fea4756d471f02edaafa3cff"}
Jan 25 00:35:42 crc kubenswrapper[4947]: I0125 00:35:42.292115 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" event={"ID":"b022a945-2af3-4275-bc4b-5db0790be691","Type":"ContainerStarted","Data":"0c2775766e7822dcc8fc40833d9e3a86b971caee2c80f169a3c9941c2462eb7b"}
Jan 25 00:35:42 crc kubenswrapper[4947]: I0125 00:35:42.292278 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" event={"ID":"b022a945-2af3-4275-bc4b-5db0790be691","Type":"ContainerStarted","Data":"f5ca8876c4d893e65d935a3e684f23d91ef308c1f393325fc7652bb85d3a7133"}
Jan 25 00:35:42 crc kubenswrapper[4947]: I0125 00:35:42.292941 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" event={"ID":"9b3c0215-a9e0-45e1-a844-c93fd70138c9","Type":"ContainerStarted","Data":"09b2ec1f4cfd62ffdba3f025deddd8d3b5466b10c3908d282236b56898c225d4"}
Jan 25 00:35:42 crc kubenswrapper[4947]: I0125 00:35:42.311784 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" podStartSLOduration=2.024195855 podStartE2EDuration="2.311766595s" podCreationTimestamp="2026-01-25 00:35:40 +0000 UTC" firstStartedPulling="2026-01-25 00:35:41.557474398 +0000 UTC m=+1580.790464838" lastFinishedPulling="2026-01-25 00:35:41.845045138 +0000 UTC m=+1581.078035578" observedRunningTime="2026-01-25 00:35:42.30543732 +0000 UTC m=+1581.538427770" watchObservedRunningTime="2026-01-25 00:35:42.311766595 +0000 UTC m=+1581.544757035"
Jan 25 00:35:42 crc kubenswrapper[4947]: I0125 00:35:42.324601 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" podStartSLOduration=2.924874903 podStartE2EDuration="3.32458053s" podCreationTimestamp="2026-01-25 00:35:39 +0000 UTC" firstStartedPulling="2026-01-25 00:35:40.920868445 +0000 UTC m=+1580.153858885" lastFinishedPulling="2026-01-25 00:35:41.320574072 +0000 UTC m=+1580.553564512" observedRunningTime="2026-01-25 00:35:42.323842921 +0000 UTC m=+1581.556833381" watchObservedRunningTime="2026-01-25 00:35:42.32458053 +0000 UTC m=+1581.557570970"
Jan 25 00:35:43 crc kubenswrapper[4947]: I0125 00:35:43.089674 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f"
Jan 25 00:35:43 crc kubenswrapper[4947]: E0125 00:35:43.089895 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b"
Jan 25 00:35:44 crc kubenswrapper[4947]: I0125 00:35:44.296554 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0"
Jan 25 00:35:44 crc kubenswrapper[4947]: I0125 00:35:44.347709 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0"
Jan 25 00:35:44 crc kubenswrapper[4947]: I0125 00:35:44.392739 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0"
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.167245 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pvgb8"]
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.168093 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" podUID="7b062645-435c-4461-93cd-8cbe7cd8e733" containerName="default-interconnect" containerID="cri-o://99188ef6337d790fc8a682f69dd76728f1c088acbf4f5174dab2cdba95af3406" gracePeriod=30
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.366775 4947 generic.go:334] "Generic (PLEG): container finished" podID="9b3c0215-a9e0-45e1-a844-c93fd70138c9" containerID="d9ce820614e5d7a763129eac969f4e1c9c86f14f7a6adc4b28a0257f580b10cd" exitCode=0
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.366857 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" event={"ID":"9b3c0215-a9e0-45e1-a844-c93fd70138c9","Type":"ContainerDied","Data":"d9ce820614e5d7a763129eac969f4e1c9c86f14f7a6adc4b28a0257f580b10cd"}
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.367504 4947 scope.go:117] "RemoveContainer" containerID="d9ce820614e5d7a763129eac969f4e1c9c86f14f7a6adc4b28a0257f580b10cd"
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.370037 4947 generic.go:334] "Generic (PLEG): container finished" podID="5ef6c1dd-abb0-4c2f-8aa5-13614c09e445" containerID="952628d963a6ca1dc30326049a85e581da8b18963ce22aa7b39a33466cdae78a" exitCode=0
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.370112 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" event={"ID":"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445","Type":"ContainerDied","Data":"952628d963a6ca1dc30326049a85e581da8b18963ce22aa7b39a33466cdae78a"}
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.372690 4947 scope.go:117] "RemoveContainer" containerID="952628d963a6ca1dc30326049a85e581da8b18963ce22aa7b39a33466cdae78a"
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.384813 4947 generic.go:334] "Generic (PLEG): container finished" podID="7b062645-435c-4461-93cd-8cbe7cd8e733" containerID="99188ef6337d790fc8a682f69dd76728f1c088acbf4f5174dab2cdba95af3406" exitCode=0
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.384890 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" event={"ID":"7b062645-435c-4461-93cd-8cbe7cd8e733","Type":"ContainerDied","Data":"99188ef6337d790fc8a682f69dd76728f1c088acbf4f5174dab2cdba95af3406"}
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.630544 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8"
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.770145 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-users\") pod \"7b062645-435c-4461-93cd-8cbe7cd8e733\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") "
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.770212 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-ca\") pod \"7b062645-435c-4461-93cd-8cbe7cd8e733\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") "
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.770248 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-credentials\") pod \"7b062645-435c-4461-93cd-8cbe7cd8e733\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") "
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.770324 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-credentials\") pod \"7b062645-435c-4461-93cd-8cbe7cd8e733\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") "
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.770355 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-config\") pod \"7b062645-435c-4461-93cd-8cbe7cd8e733\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") "
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.770386 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-ca\") pod \"7b062645-435c-4461-93cd-8cbe7cd8e733\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") "
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.770410 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vf7l\" (UniqueName: \"kubernetes.io/projected/7b062645-435c-4461-93cd-8cbe7cd8e733-kube-api-access-5vf7l\") pod \"7b062645-435c-4461-93cd-8cbe7cd8e733\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") "
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.771462 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "7b062645-435c-4461-93cd-8cbe7cd8e733" (UID: "7b062645-435c-4461-93cd-8cbe7cd8e733"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.775162 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "7b062645-435c-4461-93cd-8cbe7cd8e733" (UID: "7b062645-435c-4461-93cd-8cbe7cd8e733"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.781423 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "7b062645-435c-4461-93cd-8cbe7cd8e733" (UID: "7b062645-435c-4461-93cd-8cbe7cd8e733"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.781430 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b062645-435c-4461-93cd-8cbe7cd8e733-kube-api-access-5vf7l" (OuterVolumeSpecName: "kube-api-access-5vf7l") pod "7b062645-435c-4461-93cd-8cbe7cd8e733" (UID: "7b062645-435c-4461-93cd-8cbe7cd8e733"). InnerVolumeSpecName "kube-api-access-5vf7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.781481 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "7b062645-435c-4461-93cd-8cbe7cd8e733" (UID: "7b062645-435c-4461-93cd-8cbe7cd8e733"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.783514 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "7b062645-435c-4461-93cd-8cbe7cd8e733" (UID: "7b062645-435c-4461-93cd-8cbe7cd8e733"). InnerVolumeSpecName "default-interconnect-openstack-credentials".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.791235 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "7b062645-435c-4461-93cd-8cbe7cd8e733" (UID: "7b062645-435c-4461-93cd-8cbe7cd8e733"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.872339 4947 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-users\") on node \"crc\" DevicePath \"\"" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.872386 4947 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.872406 4947 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.872423 4947 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.872436 4947 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 
00:35:52.872448 4947 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.872459 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vf7l\" (UniqueName: \"kubernetes.io/projected/7b062645-435c-4461-93cd-8cbe7cd8e733-kube-api-access-5vf7l\") on node \"crc\" DevicePath \"\"" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.391819 4947 generic.go:334] "Generic (PLEG): container finished" podID="b022a945-2af3-4275-bc4b-5db0790be691" containerID="0c2775766e7822dcc8fc40833d9e3a86b971caee2c80f169a3c9941c2462eb7b" exitCode=0 Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.391876 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" event={"ID":"b022a945-2af3-4275-bc4b-5db0790be691","Type":"ContainerDied","Data":"0c2775766e7822dcc8fc40833d9e3a86b971caee2c80f169a3c9941c2462eb7b"} Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.392351 4947 scope.go:117] "RemoveContainer" containerID="0c2775766e7822dcc8fc40833d9e3a86b971caee2c80f169a3c9941c2462eb7b" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.397238 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" event={"ID":"9b3c0215-a9e0-45e1-a844-c93fd70138c9","Type":"ContainerStarted","Data":"337e296ba4309a43005154c28f2f3bc7aaf976138af72dd9cd336f268755d3b5"} Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.400029 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" 
event={"ID":"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445","Type":"ContainerStarted","Data":"bf6ac4b92c89fa1eb2bbe3998935500a69af02ce449b2f439c10729177848fce"} Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.402523 4947 generic.go:334] "Generic (PLEG): container finished" podID="0d98fa0e-0a1b-4139-b32a-dbc771dc0939" containerID="cfcf79fed5bba223ce07589542b769fe8fff87686f83fd8454f58b1e7ee7574b" exitCode=0 Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.402593 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" event={"ID":"0d98fa0e-0a1b-4139-b32a-dbc771dc0939","Type":"ContainerDied","Data":"cfcf79fed5bba223ce07589542b769fe8fff87686f83fd8454f58b1e7ee7574b"} Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.403283 4947 scope.go:117] "RemoveContainer" containerID="cfcf79fed5bba223ce07589542b769fe8fff87686f83fd8454f58b1e7ee7574b" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.405532 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" event={"ID":"7b062645-435c-4461-93cd-8cbe7cd8e733","Type":"ContainerDied","Data":"1e919ed0615f4f4827a60cadd255f60dc5f4870e6c12a5fe6e5916224ada10bc"} Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.405581 4947 scope.go:117] "RemoveContainer" containerID="99188ef6337d790fc8a682f69dd76728f1c088acbf4f5174dab2cdba95af3406" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.405669 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.412905 4947 generic.go:334] "Generic (PLEG): container finished" podID="d0837a20-7313-4da0-9df6-1ce849d1f029" containerID="31bf172e06d3d6ed5dd0c2957898aced9ccadd1061a4b014252d4a8765250880" exitCode=0 Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.412959 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" event={"ID":"d0837a20-7313-4da0-9df6-1ce849d1f029","Type":"ContainerDied","Data":"31bf172e06d3d6ed5dd0c2957898aced9ccadd1061a4b014252d4a8765250880"} Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.413389 4947 scope.go:117] "RemoveContainer" containerID="31bf172e06d3d6ed5dd0c2957898aced9ccadd1061a4b014252d4a8765250880" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.538239 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pvgb8"] Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.546587 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pvgb8"] Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.669404 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-qb5z8"] Jan 25 00:35:53 crc kubenswrapper[4947]: E0125 00:35:53.670026 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b062645-435c-4461-93cd-8cbe7cd8e733" containerName="default-interconnect" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.670092 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b062645-435c-4461-93cd-8cbe7cd8e733" containerName="default-interconnect" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.670274 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b062645-435c-4461-93cd-8cbe7cd8e733" 
containerName="default-interconnect" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.670696 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.674683 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.674946 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.675117 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.675259 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-ktwfn" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.675307 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.675272 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.675497 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.684359 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-qb5z8"] Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.790813 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: 
\"kubernetes.io/configmap/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-sasl-config\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.790863 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.790919 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-sasl-users\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.790947 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.790968 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-openstack-credentials\") pod 
\"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.790986 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5nrl\" (UniqueName: \"kubernetes.io/projected/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-kube-api-access-x5nrl\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.791007 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: E0125 00:35:53.881464 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b3c0215_a9e0_45e1_a844_c93fd70138c9.slice/crio-337e296ba4309a43005154c28f2f3bc7aaf976138af72dd9cd336f268755d3b5.scope\": RecentStats: unable to find data in memory cache]" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.892449 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-sasl-config\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.892514 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.892573 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-sasl-users\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.892601 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.892625 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.892648 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5nrl\" (UniqueName: \"kubernetes.io/projected/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-kube-api-access-x5nrl\") pod 
\"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.893288 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.893341 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-sasl-config\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.901320 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-sasl-users\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.901759 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.903405 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.903882 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.908975 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.914532 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5nrl\" (UniqueName: \"kubernetes.io/projected/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-kube-api-access-x5nrl\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.052040 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.089939 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:35:54 crc kubenswrapper[4947]: E0125 00:35:54.090209 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.366750 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.367631 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.370752 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.375294 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.381664 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.447255 4947 generic.go:334] "Generic (PLEG): container finished" podID="5ef6c1dd-abb0-4c2f-8aa5-13614c09e445" containerID="bf6ac4b92c89fa1eb2bbe3998935500a69af02ce449b2f439c10729177848fce" exitCode=0 Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.447321 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" event={"ID":"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445","Type":"ContainerDied","Data":"bf6ac4b92c89fa1eb2bbe3998935500a69af02ce449b2f439c10729177848fce"} Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.447354 4947 scope.go:117] "RemoveContainer" containerID="952628d963a6ca1dc30326049a85e581da8b18963ce22aa7b39a33466cdae78a" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.447881 4947 scope.go:117] "RemoveContainer" containerID="bf6ac4b92c89fa1eb2bbe3998935500a69af02ce449b2f439c10729177848fce" Jan 25 00:35:54 crc kubenswrapper[4947]: E0125 00:35:54.448244 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm_service-telemetry(5ef6c1dd-abb0-4c2f-8aa5-13614c09e445)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" 
podUID="5ef6c1dd-abb0-4c2f-8aa5-13614c09e445" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.472250 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" event={"ID":"0d98fa0e-0a1b-4139-b32a-dbc771dc0939","Type":"ContainerStarted","Data":"35e16116d28f7d36faafc8c370c2b58fc605be75020a2109beeae992364e6c13"} Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.483354 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" event={"ID":"d0837a20-7313-4da0-9df6-1ce849d1f029","Type":"ContainerStarted","Data":"8a2e587071b057abdd25da40e9803f24478fc5cb9081aa1437831aad84949e07"} Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.505314 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/1e239640-20ad-42e0-8db4-0ada55b1274c-qdr-test-config\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.505376 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/1e239640-20ad-42e0-8db4-0ada55b1274c-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.505421 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rhdt\" (UniqueName: \"kubernetes.io/projected/1e239640-20ad-42e0-8db4-0ada55b1274c-kube-api-access-6rhdt\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.510174 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" event={"ID":"b022a945-2af3-4275-bc4b-5db0790be691","Type":"ContainerStarted","Data":"aeea95242d810e307e8bdf31d6bb90ca0e574420d839fb3d8a0912352dc00ac9"} Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.530463 4947 generic.go:334] "Generic (PLEG): container finished" podID="9b3c0215-a9e0-45e1-a844-c93fd70138c9" containerID="337e296ba4309a43005154c28f2f3bc7aaf976138af72dd9cd336f268755d3b5" exitCode=0 Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.530507 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" event={"ID":"9b3c0215-a9e0-45e1-a844-c93fd70138c9","Type":"ContainerDied","Data":"337e296ba4309a43005154c28f2f3bc7aaf976138af72dd9cd336f268755d3b5"} Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.530539 4947 scope.go:117] "RemoveContainer" containerID="d9ce820614e5d7a763129eac969f4e1c9c86f14f7a6adc4b28a0257f580b10cd" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.531084 4947 scope.go:117] "RemoveContainer" containerID="337e296ba4309a43005154c28f2f3bc7aaf976138af72dd9cd336f268755d3b5" Jan 25 00:35:54 crc kubenswrapper[4947]: E0125 00:35:54.531463 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-9c5498458-pjx7f_service-telemetry(9b3c0215-a9e0-45e1-a844-c93fd70138c9)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" podUID="9b3c0215-a9e0-45e1-a844-c93fd70138c9" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.568904 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-qb5z8"] Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.606515 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6rhdt\" (UniqueName: \"kubernetes.io/projected/1e239640-20ad-42e0-8db4-0ada55b1274c-kube-api-access-6rhdt\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.607001 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/1e239640-20ad-42e0-8db4-0ada55b1274c-qdr-test-config\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.609554 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/1e239640-20ad-42e0-8db4-0ada55b1274c-qdr-test-config\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.607103 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/1e239640-20ad-42e0-8db4-0ada55b1274c-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.625151 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/1e239640-20ad-42e0-8db4-0ada55b1274c-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.638960 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rhdt\" (UniqueName: 
\"kubernetes.io/projected/1e239640-20ad-42e0-8db4-0ada55b1274c-kube-api-access-6rhdt\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.683462 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.098692 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b062645-435c-4461-93cd-8cbe7cd8e733" path="/var/lib/kubelet/pods/7b062645-435c-4461-93cd-8cbe7cd8e733/volumes" Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.136960 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 25 00:35:55 crc kubenswrapper[4947]: W0125 00:35:55.142455 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e239640_20ad_42e0_8db4_0ada55b1274c.slice/crio-a0910e5e4d03284a7f756a3f4b820f03ba302d65fc240c6f9c42739539e4fd9c WatchSource:0}: Error finding container a0910e5e4d03284a7f756a3f4b820f03ba302d65fc240c6f9c42739539e4fd9c: Status 404 returned error can't find the container with id a0910e5e4d03284a7f756a3f4b820f03ba302d65fc240c6f9c42739539e4fd9c Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.541667 4947 generic.go:334] "Generic (PLEG): container finished" podID="0d98fa0e-0a1b-4139-b32a-dbc771dc0939" containerID="35e16116d28f7d36faafc8c370c2b58fc605be75020a2109beeae992364e6c13" exitCode=0 Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.541721 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" event={"ID":"0d98fa0e-0a1b-4139-b32a-dbc771dc0939","Type":"ContainerDied","Data":"35e16116d28f7d36faafc8c370c2b58fc605be75020a2109beeae992364e6c13"} Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.541750 4947 scope.go:117] 
"RemoveContainer" containerID="cfcf79fed5bba223ce07589542b769fe8fff87686f83fd8454f58b1e7ee7574b" Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.542328 4947 scope.go:117] "RemoveContainer" containerID="35e16116d28f7d36faafc8c370c2b58fc605be75020a2109beeae992364e6c13" Jan 25 00:35:55 crc kubenswrapper[4947]: E0125 00:35:55.542541 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf_service-telemetry(0d98fa0e-0a1b-4139-b32a-dbc771dc0939)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" podUID="0d98fa0e-0a1b-4139-b32a-dbc771dc0939" Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.546426 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"1e239640-20ad-42e0-8db4-0ada55b1274c","Type":"ContainerStarted","Data":"a0910e5e4d03284a7f756a3f4b820f03ba302d65fc240c6f9c42739539e4fd9c"} Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.549412 4947 generic.go:334] "Generic (PLEG): container finished" podID="d0837a20-7313-4da0-9df6-1ce849d1f029" containerID="8a2e587071b057abdd25da40e9803f24478fc5cb9081aa1437831aad84949e07" exitCode=0 Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.549472 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" event={"ID":"d0837a20-7313-4da0-9df6-1ce849d1f029","Type":"ContainerDied","Data":"8a2e587071b057abdd25da40e9803f24478fc5cb9081aa1437831aad84949e07"} Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.549778 4947 scope.go:117] "RemoveContainer" containerID="8a2e587071b057abdd25da40e9803f24478fc5cb9081aa1437831aad84949e07" Jan 25 00:35:55 crc kubenswrapper[4947]: E0125 00:35:55.549944 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n_service-telemetry(d0837a20-7313-4da0-9df6-1ce849d1f029)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" podUID="d0837a20-7313-4da0-9df6-1ce849d1f029" Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.551231 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" event={"ID":"53107577-1f0a-4c8d-b5e5-81e4d415f3a1","Type":"ContainerStarted","Data":"ccf4c16e8392e9c70c4063788fd88c5fefe69cdef47d6796fde049be46c08036"} Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.551251 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" event={"ID":"53107577-1f0a-4c8d-b5e5-81e4d415f3a1","Type":"ContainerStarted","Data":"9ef20d242c2ad9ed496929283d7ff693cd5bf96f1e3d7716bc2f45a0ca67d0ac"} Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.553316 4947 generic.go:334] "Generic (PLEG): container finished" podID="b022a945-2af3-4275-bc4b-5db0790be691" containerID="aeea95242d810e307e8bdf31d6bb90ca0e574420d839fb3d8a0912352dc00ac9" exitCode=0 Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.553338 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" event={"ID":"b022a945-2af3-4275-bc4b-5db0790be691","Type":"ContainerDied","Data":"aeea95242d810e307e8bdf31d6bb90ca0e574420d839fb3d8a0912352dc00ac9"} Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.553588 4947 scope.go:117] "RemoveContainer" containerID="aeea95242d810e307e8bdf31d6bb90ca0e574420d839fb3d8a0912352dc00ac9" Jan 25 00:35:55 crc kubenswrapper[4947]: E0125 00:35:55.553789 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting 
failed container=bridge pod=default-cloud1-ceil-event-smartgateway-899c7f46d-5982c_service-telemetry(b022a945-2af3-4275-bc4b-5db0790be691)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" podUID="b022a945-2af3-4275-bc4b-5db0790be691" Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.596629 4947 scope.go:117] "RemoveContainer" containerID="31bf172e06d3d6ed5dd0c2957898aced9ccadd1061a4b014252d4a8765250880" Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.615642 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" podStartSLOduration=3.615622352 podStartE2EDuration="3.615622352s" podCreationTimestamp="2026-01-25 00:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:35:55.613842345 +0000 UTC m=+1594.846832785" watchObservedRunningTime="2026-01-25 00:35:55.615622352 +0000 UTC m=+1594.848612792" Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.643186 4947 scope.go:117] "RemoveContainer" containerID="0c2775766e7822dcc8fc40833d9e3a86b971caee2c80f169a3c9941c2462eb7b" Jan 25 00:36:05 crc kubenswrapper[4947]: I0125 00:36:05.090252 4947 scope.go:117] "RemoveContainer" containerID="bf6ac4b92c89fa1eb2bbe3998935500a69af02ce449b2f439c10729177848fce" Jan 25 00:36:05 crc kubenswrapper[4947]: I0125 00:36:05.090915 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:36:05 crc kubenswrapper[4947]: E0125 00:36:05.091297 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:36:06 crc kubenswrapper[4947]: E0125 00:36:06.971750 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/tripleowallabycentos9/openstack-qdrouterd:current-tripleo" Jan 25 00:36:06 crc kubenswrapper[4947]: E0125 00:36:06.972237 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:qdr,Image:quay.io/tripleowallabycentos9/openstack-qdrouterd:current-tripleo,Command:[/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:amqp,HostPort:0,ContainerPort:5672,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-interconnect-selfsigned-cert,ReadOnly:false,MountPath:/etc/pki/tls/certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:qdr-test-config,ReadOnly:false,MountPath:/etc/qpid-dispatch/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rhdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod qdr-test_service-telemetry(1e239640-20ad-42e0-8db4-0ada55b1274c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 00:36:06 crc kubenswrapper[4947]: E0125 00:36:06.973434 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"qdr\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/qdr-test" podUID="1e239640-20ad-42e0-8db4-0ada55b1274c" Jan 25 00:36:07 crc kubenswrapper[4947]: I0125 00:36:07.090293 4947 scope.go:117] "RemoveContainer" containerID="35e16116d28f7d36faafc8c370c2b58fc605be75020a2109beeae992364e6c13" Jan 25 00:36:07 crc kubenswrapper[4947]: I0125 00:36:07.090440 4947 scope.go:117] "RemoveContainer" containerID="8a2e587071b057abdd25da40e9803f24478fc5cb9081aa1437831aad84949e07" Jan 25 00:36:07 crc kubenswrapper[4947]: I0125 00:36:07.644527 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" event={"ID":"d0837a20-7313-4da0-9df6-1ce849d1f029","Type":"ContainerStarted","Data":"a9b1927d41a5bc517fbc0649f1a336e77ed866ebec74337a5668d2eb7759755d"} Jan 25 00:36:07 crc kubenswrapper[4947]: I0125 00:36:07.650283 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" 
event={"ID":"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445","Type":"ContainerStarted","Data":"5089b3884e5aabf6594b10df4b26d4986df2459c4fae97c1f632658a206cf0d0"} Jan 25 00:36:07 crc kubenswrapper[4947]: I0125 00:36:07.655022 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" event={"ID":"0d98fa0e-0a1b-4139-b32a-dbc771dc0939","Type":"ContainerStarted","Data":"d47a05edfe52f289259cb07f361d7d752bdb8172bb99efc2758000290702c4a9"} Jan 25 00:36:07 crc kubenswrapper[4947]: E0125 00:36:07.656371 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"qdr\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/tripleowallabycentos9/openstack-qdrouterd:current-tripleo\\\"\"" pod="service-telemetry/qdr-test" podUID="1e239640-20ad-42e0-8db4-0ada55b1274c" Jan 25 00:36:09 crc kubenswrapper[4947]: I0125 00:36:09.089398 4947 scope.go:117] "RemoveContainer" containerID="337e296ba4309a43005154c28f2f3bc7aaf976138af72dd9cd336f268755d3b5" Jan 25 00:36:10 crc kubenswrapper[4947]: I0125 00:36:10.090328 4947 scope.go:117] "RemoveContainer" containerID="aeea95242d810e307e8bdf31d6bb90ca0e574420d839fb3d8a0912352dc00ac9" Jan 25 00:36:10 crc kubenswrapper[4947]: I0125 00:36:10.689100 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" event={"ID":"9b3c0215-a9e0-45e1-a844-c93fd70138c9","Type":"ContainerStarted","Data":"e6faf1d91fbec62cbd236f7b59d347475195c4098f4f949b15e10ea150c42b3b"} Jan 25 00:36:10 crc kubenswrapper[4947]: I0125 00:36:10.693515 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" event={"ID":"b022a945-2af3-4275-bc4b-5db0790be691","Type":"ContainerStarted","Data":"f7a5668c5ce3d4fd9817304a4fde7c2cf72e81822d6545d9eac7237029cc1f88"} Jan 25 00:36:19 crc kubenswrapper[4947]: I0125 00:36:19.092662 4947 
scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:36:19 crc kubenswrapper[4947]: E0125 00:36:19.093345 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:36:20 crc kubenswrapper[4947]: I0125 00:36:20.771898 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"1e239640-20ad-42e0-8db4-0ada55b1274c","Type":"ContainerStarted","Data":"38fc9634cbadc9e242f4d8e8ac0e02dc25fd742c8f1b2a2564426055c35b2c34"} Jan 25 00:36:20 crc kubenswrapper[4947]: I0125 00:36:20.788239 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.6112226060000001 podStartE2EDuration="26.788214608s" podCreationTimestamp="2026-01-25 00:35:54 +0000 UTC" firstStartedPulling="2026-01-25 00:35:55.144117549 +0000 UTC m=+1594.377107989" lastFinishedPulling="2026-01-25 00:36:20.321109551 +0000 UTC m=+1619.554099991" observedRunningTime="2026-01-25 00:36:20.783410023 +0000 UTC m=+1620.016400463" watchObservedRunningTime="2026-01-25 00:36:20.788214608 +0000 UTC m=+1620.021205048" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.175756 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-7ffht"] Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.176772 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.179669 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.179978 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.180445 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.181540 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.181781 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.189554 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.197332 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-7ffht"] Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.322615 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n95pk\" (UniqueName: \"kubernetes.io/projected/b6c83c5f-2bee-41a9-8433-391c8e71812b-kube-api-access-n95pk\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.322690 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.322774 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-config\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.322820 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-sensubility-config\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.322855 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.322901 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-healthcheck-log\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc 
kubenswrapper[4947]: I0125 00:36:21.323012 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.424612 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-sensubility-config\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.424755 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.424885 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-healthcheck-log\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.424972 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " 
pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.425052 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n95pk\" (UniqueName: \"kubernetes.io/projected/b6c83c5f-2bee-41a9-8433-391c8e71812b-kube-api-access-n95pk\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.425113 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.425188 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-config\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.426383 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-sensubility-config\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.426604 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-config\") pod \"stf-smoketest-smoke1-7ffht\" (UID: 
\"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.427677 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.427888 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.428787 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-healthcheck-log\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.428968 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.453902 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n95pk\" (UniqueName: \"kubernetes.io/projected/b6c83c5f-2bee-41a9-8433-391c8e71812b-kube-api-access-n95pk\") pod \"stf-smoketest-smoke1-7ffht\" 
(UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.496866 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.615159 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.620919 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.636431 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.733205 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfnhv\" (UniqueName: \"kubernetes.io/projected/efbf27b1-75f2-42e1-87b0-d9a6553993eb-kube-api-access-pfnhv\") pod \"curl\" (UID: \"efbf27b1-75f2-42e1-87b0-d9a6553993eb\") " pod="service-telemetry/curl" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.835505 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfnhv\" (UniqueName: \"kubernetes.io/projected/efbf27b1-75f2-42e1-87b0-d9a6553993eb-kube-api-access-pfnhv\") pod \"curl\" (UID: \"efbf27b1-75f2-42e1-87b0-d9a6553993eb\") " pod="service-telemetry/curl" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.857817 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfnhv\" (UniqueName: \"kubernetes.io/projected/efbf27b1-75f2-42e1-87b0-d9a6553993eb-kube-api-access-pfnhv\") pod \"curl\" (UID: \"efbf27b1-75f2-42e1-87b0-d9a6553993eb\") " pod="service-telemetry/curl" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.968662 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 25 00:36:22 crc kubenswrapper[4947]: I0125 00:36:22.015571 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-7ffht"] Jan 25 00:36:22 crc kubenswrapper[4947]: W0125 00:36:22.022932 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c83c5f_2bee_41a9_8433_391c8e71812b.slice/crio-b22c8ef5e4718a59fd30f99e333f81a3f0e6be9c29962e6b498192b4a437cd92 WatchSource:0}: Error finding container b22c8ef5e4718a59fd30f99e333f81a3f0e6be9c29962e6b498192b4a437cd92: Status 404 returned error can't find the container with id b22c8ef5e4718a59fd30f99e333f81a3f0e6be9c29962e6b498192b4a437cd92 Jan 25 00:36:22 crc kubenswrapper[4947]: I0125 00:36:22.290789 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 25 00:36:22 crc kubenswrapper[4947]: I0125 00:36:22.792328 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7ffht" event={"ID":"b6c83c5f-2bee-41a9-8433-391c8e71812b","Type":"ContainerStarted","Data":"b22c8ef5e4718a59fd30f99e333f81a3f0e6be9c29962e6b498192b4a437cd92"} Jan 25 00:36:22 crc kubenswrapper[4947]: I0125 00:36:22.793496 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"efbf27b1-75f2-42e1-87b0-d9a6553993eb","Type":"ContainerStarted","Data":"9aeefd58e22e91b203be33a879dd5b664a89c647e26bffc8bcc7b2be35a32d8c"} Jan 25 00:36:24 crc kubenswrapper[4947]: I0125 00:36:24.822636 4947 generic.go:334] "Generic (PLEG): container finished" podID="efbf27b1-75f2-42e1-87b0-d9a6553993eb" containerID="326ef725fcb1dd594f7bbc583c22d8fc4e7e9add78f50e9f61ae71c3ff520d34" exitCode=0 Jan 25 00:36:24 crc kubenswrapper[4947]: I0125 00:36:24.822992 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" 
event={"ID":"efbf27b1-75f2-42e1-87b0-d9a6553993eb","Type":"ContainerDied","Data":"326ef725fcb1dd594f7bbc583c22d8fc4e7e9add78f50e9f61ae71c3ff520d34"} Jan 25 00:36:31 crc kubenswrapper[4947]: I0125 00:36:31.101742 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:36:31 crc kubenswrapper[4947]: E0125 00:36:31.103061 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.264537 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.467058 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfnhv\" (UniqueName: \"kubernetes.io/projected/efbf27b1-75f2-42e1-87b0-d9a6553993eb-kube-api-access-pfnhv\") pod \"efbf27b1-75f2-42e1-87b0-d9a6553993eb\" (UID: \"efbf27b1-75f2-42e1-87b0-d9a6553993eb\") " Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.470531 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_efbf27b1-75f2-42e1-87b0-d9a6553993eb/curl/0.log" Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.471313 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efbf27b1-75f2-42e1-87b0-d9a6553993eb-kube-api-access-pfnhv" (OuterVolumeSpecName: "kube-api-access-pfnhv") pod "efbf27b1-75f2-42e1-87b0-d9a6553993eb" (UID: "efbf27b1-75f2-42e1-87b0-d9a6553993eb"). InnerVolumeSpecName "kube-api-access-pfnhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.568602 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfnhv\" (UniqueName: \"kubernetes.io/projected/efbf27b1-75f2-42e1-87b0-d9a6553993eb-kube-api-access-pfnhv\") on node \"crc\" DevicePath \"\"" Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.831884 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-4gkbj_b6cfc9d0-598c-4149-8a38-ec02ced8d2b8/prometheus-webhook-snmp/0.log" Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.882784 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7ffht" event={"ID":"b6c83c5f-2bee-41a9-8433-391c8e71812b","Type":"ContainerStarted","Data":"db6c1923ad105525adf2b51f7b5d0f146fb16811af782b766667e443c43fec08"} Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.883918 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"efbf27b1-75f2-42e1-87b0-d9a6553993eb","Type":"ContainerDied","Data":"9aeefd58e22e91b203be33a879dd5b664a89c647e26bffc8bcc7b2be35a32d8c"} Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.883948 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aeefd58e22e91b203be33a879dd5b664a89c647e26bffc8bcc7b2be35a32d8c" Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.884001 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 25 00:36:37 crc kubenswrapper[4947]: I0125 00:36:37.928144 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7ffht" event={"ID":"b6c83c5f-2bee-41a9-8433-391c8e71812b","Type":"ContainerStarted","Data":"c440b36ccd49f738e0b6f93f7e1c26d5c11687d6b9bbecd5955862939a30bcbb"} Jan 25 00:36:37 crc kubenswrapper[4947]: I0125 00:36:37.949372 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-7ffht" podStartSLOduration=1.2883502629999999 podStartE2EDuration="16.949354231s" podCreationTimestamp="2026-01-25 00:36:21 +0000 UTC" firstStartedPulling="2026-01-25 00:36:22.025095318 +0000 UTC m=+1621.258085758" lastFinishedPulling="2026-01-25 00:36:37.686099276 +0000 UTC m=+1636.919089726" observedRunningTime="2026-01-25 00:36:37.943663732 +0000 UTC m=+1637.176654172" watchObservedRunningTime="2026-01-25 00:36:37.949354231 +0000 UTC m=+1637.182344671" Jan 25 00:36:45 crc kubenswrapper[4947]: I0125 00:36:45.089630 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:36:45 crc kubenswrapper[4947]: E0125 00:36:45.090508 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:36:58 crc kubenswrapper[4947]: I0125 00:36:58.089934 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:36:58 crc kubenswrapper[4947]: E0125 00:36:58.091002 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:37:03 crc kubenswrapper[4947]: I0125 00:37:03.046404 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-4gkbj_b6cfc9d0-598c-4149-8a38-ec02ced8d2b8/prometheus-webhook-snmp/0.log" Jan 25 00:37:07 crc kubenswrapper[4947]: I0125 00:37:07.216091 4947 generic.go:334] "Generic (PLEG): container finished" podID="b6c83c5f-2bee-41a9-8433-391c8e71812b" containerID="db6c1923ad105525adf2b51f7b5d0f146fb16811af782b766667e443c43fec08" exitCode=1 Jan 25 00:37:07 crc kubenswrapper[4947]: I0125 00:37:07.216190 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7ffht" event={"ID":"b6c83c5f-2bee-41a9-8433-391c8e71812b","Type":"ContainerDied","Data":"db6c1923ad105525adf2b51f7b5d0f146fb16811af782b766667e443c43fec08"} Jan 25 00:37:07 crc kubenswrapper[4947]: I0125 00:37:07.217412 4947 scope.go:117] "RemoveContainer" containerID="db6c1923ad105525adf2b51f7b5d0f146fb16811af782b766667e443c43fec08" Jan 25 00:37:09 crc kubenswrapper[4947]: I0125 00:37:09.090454 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:37:09 crc kubenswrapper[4947]: E0125 00:37:09.090750 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" 
podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:37:10 crc kubenswrapper[4947]: I0125 00:37:10.239053 4947 generic.go:334] "Generic (PLEG): container finished" podID="b6c83c5f-2bee-41a9-8433-391c8e71812b" containerID="c440b36ccd49f738e0b6f93f7e1c26d5c11687d6b9bbecd5955862939a30bcbb" exitCode=0 Jan 25 00:37:10 crc kubenswrapper[4947]: I0125 00:37:10.239118 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7ffht" event={"ID":"b6c83c5f-2bee-41a9-8433-391c8e71812b","Type":"ContainerDied","Data":"c440b36ccd49f738e0b6f93f7e1c26d5c11687d6b9bbecd5955862939a30bcbb"} Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.588666 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.697608 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-healthcheck-log\") pod \"b6c83c5f-2bee-41a9-8433-391c8e71812b\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.697984 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-entrypoint-script\") pod \"b6c83c5f-2bee-41a9-8433-391c8e71812b\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.698022 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-publisher\") pod \"b6c83c5f-2bee-41a9-8433-391c8e71812b\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.698083 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-sensubility-config\") pod \"b6c83c5f-2bee-41a9-8433-391c8e71812b\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.698151 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-config\") pod \"b6c83c5f-2bee-41a9-8433-391c8e71812b\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.698181 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-entrypoint-script\") pod \"b6c83c5f-2bee-41a9-8433-391c8e71812b\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.698239 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n95pk\" (UniqueName: \"kubernetes.io/projected/b6c83c5f-2bee-41a9-8433-391c8e71812b-kube-api-access-n95pk\") pod \"b6c83c5f-2bee-41a9-8433-391c8e71812b\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.704750 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c83c5f-2bee-41a9-8433-391c8e71812b-kube-api-access-n95pk" (OuterVolumeSpecName: "kube-api-access-n95pk") pod "b6c83c5f-2bee-41a9-8433-391c8e71812b" (UID: "b6c83c5f-2bee-41a9-8433-391c8e71812b"). InnerVolumeSpecName "kube-api-access-n95pk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.715539 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "b6c83c5f-2bee-41a9-8433-391c8e71812b" (UID: "b6c83c5f-2bee-41a9-8433-391c8e71812b"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.719800 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "b6c83c5f-2bee-41a9-8433-391c8e71812b" (UID: "b6c83c5f-2bee-41a9-8433-391c8e71812b"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.720117 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "b6c83c5f-2bee-41a9-8433-391c8e71812b" (UID: "b6c83c5f-2bee-41a9-8433-391c8e71812b"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.721070 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "b6c83c5f-2bee-41a9-8433-391c8e71812b" (UID: "b6c83c5f-2bee-41a9-8433-391c8e71812b"). InnerVolumeSpecName "collectd-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.729408 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "b6c83c5f-2bee-41a9-8433-391c8e71812b" (UID: "b6c83c5f-2bee-41a9-8433-391c8e71812b"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.736010 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "b6c83c5f-2bee-41a9-8433-391c8e71812b" (UID: "b6c83c5f-2bee-41a9-8433-391c8e71812b"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.808886 4947 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.808930 4947 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.808943 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n95pk\" (UniqueName: \"kubernetes.io/projected/b6c83c5f-2bee-41a9-8433-391c8e71812b-kube-api-access-n95pk\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.808951 4947 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: 
\"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.808984 4947 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.808992 4947 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.809001 4947 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:12 crc kubenswrapper[4947]: I0125 00:37:12.263459 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7ffht" event={"ID":"b6c83c5f-2bee-41a9-8433-391c8e71812b","Type":"ContainerDied","Data":"b22c8ef5e4718a59fd30f99e333f81a3f0e6be9c29962e6b498192b4a437cd92"} Jan 25 00:37:12 crc kubenswrapper[4947]: I0125 00:37:12.263517 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b22c8ef5e4718a59fd30f99e333f81a3f0e6be9c29962e6b498192b4a437cd92" Jan 25 00:37:12 crc kubenswrapper[4947]: I0125 00:37:12.263609 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.045582 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hsw2k"] Jan 25 00:37:19 crc kubenswrapper[4947]: E0125 00:37:19.048713 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbf27b1-75f2-42e1-87b0-d9a6553993eb" containerName="curl" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.048903 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbf27b1-75f2-42e1-87b0-d9a6553993eb" containerName="curl" Jan 25 00:37:19 crc kubenswrapper[4947]: E0125 00:37:19.049049 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c83c5f-2bee-41a9-8433-391c8e71812b" containerName="smoketest-ceilometer" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.049254 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c83c5f-2bee-41a9-8433-391c8e71812b" containerName="smoketest-ceilometer" Jan 25 00:37:19 crc kubenswrapper[4947]: E0125 00:37:19.049457 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c83c5f-2bee-41a9-8433-391c8e71812b" containerName="smoketest-collectd" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.049631 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c83c5f-2bee-41a9-8433-391c8e71812b" containerName="smoketest-collectd" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.049987 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c83c5f-2bee-41a9-8433-391c8e71812b" containerName="smoketest-ceilometer" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.050185 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c83c5f-2bee-41a9-8433-391c8e71812b" containerName="smoketest-collectd" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.050327 4947 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="efbf27b1-75f2-42e1-87b0-d9a6553993eb" containerName="curl" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.051688 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.058668 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.058668 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.058764 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.063328 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hsw2k"] Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.063409 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.063450 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.063450 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.124056 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-healthcheck-log\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 
crc kubenswrapper[4947]: I0125 00:37:19.124151 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-config\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.124419 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.124523 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.124561 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.124849 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-sensubility-config\") pod 
\"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.124975 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfprm\" (UniqueName: \"kubernetes.io/projected/da929df5-1e6a-47d5-85b4-6ec2b408203e-kube-api-access-vfprm\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.227363 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-sensubility-config\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.227504 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfprm\" (UniqueName: \"kubernetes.io/projected/da929df5-1e6a-47d5-85b4-6ec2b408203e-kube-api-access-vfprm\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.227601 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-healthcheck-log\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.227670 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: 
\"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-config\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.227833 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.227901 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.227966 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.229446 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-sensubility-config\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.230400 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.231090 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-healthcheck-log\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.231317 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.234412 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.249412 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-config\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.261290 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vfprm\" (UniqueName: \"kubernetes.io/projected/da929df5-1e6a-47d5-85b4-6ec2b408203e-kube-api-access-vfprm\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.396584 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.623054 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hsw2k"] Jan 25 00:37:20 crc kubenswrapper[4947]: I0125 00:37:20.319391 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" event={"ID":"da929df5-1e6a-47d5-85b4-6ec2b408203e","Type":"ContainerStarted","Data":"0560ec9982811d5f213c37bad06bd5e70a8476cdac5f4d9ca483d0737f01458b"} Jan 25 00:37:20 crc kubenswrapper[4947]: I0125 00:37:20.319744 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" event={"ID":"da929df5-1e6a-47d5-85b4-6ec2b408203e","Type":"ContainerStarted","Data":"06d6e388fdecd0356703237e6e0279e4c2a5c45a270cbc144c451b647da6f5c4"} Jan 25 00:37:20 crc kubenswrapper[4947]: I0125 00:37:20.319760 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" event={"ID":"da929df5-1e6a-47d5-85b4-6ec2b408203e","Type":"ContainerStarted","Data":"110cf8b2b20c6c29231bf9ef83d8cfeceac4b9679d2965b70318ddee151b410e"} Jan 25 00:37:21 crc kubenswrapper[4947]: I0125 00:37:21.097999 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:37:21 crc kubenswrapper[4947]: E0125 00:37:21.098744 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:37:32 crc kubenswrapper[4947]: I0125 00:37:32.217300 4947 scope.go:117] "RemoveContainer" containerID="8bd3c19ebf87dfa9492f0b6c526fd93072c03b5bf0b404a53c130aa373ce7a49" Jan 25 00:37:32 crc kubenswrapper[4947]: I0125 00:37:32.261235 4947 scope.go:117] "RemoveContainer" containerID="3b9968f7292e05181c1395dde57498d955080e1dcb49d9484e57992f06fed9b1" Jan 25 00:37:33 crc kubenswrapper[4947]: I0125 00:37:33.090700 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:37:33 crc kubenswrapper[4947]: E0125 00:37:33.091157 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:37:46 crc kubenswrapper[4947]: I0125 00:37:46.090032 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:37:46 crc kubenswrapper[4947]: E0125 00:37:46.091203 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:37:52 crc 
kubenswrapper[4947]: I0125 00:37:52.692891 4947 generic.go:334] "Generic (PLEG): container finished" podID="da929df5-1e6a-47d5-85b4-6ec2b408203e" containerID="0560ec9982811d5f213c37bad06bd5e70a8476cdac5f4d9ca483d0737f01458b" exitCode=0 Jan 25 00:37:52 crc kubenswrapper[4947]: I0125 00:37:52.693013 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" event={"ID":"da929df5-1e6a-47d5-85b4-6ec2b408203e","Type":"ContainerDied","Data":"0560ec9982811d5f213c37bad06bd5e70a8476cdac5f4d9ca483d0737f01458b"} Jan 25 00:37:52 crc kubenswrapper[4947]: I0125 00:37:52.694773 4947 scope.go:117] "RemoveContainer" containerID="0560ec9982811d5f213c37bad06bd5e70a8476cdac5f4d9ca483d0737f01458b" Jan 25 00:37:53 crc kubenswrapper[4947]: I0125 00:37:53.705064 4947 generic.go:334] "Generic (PLEG): container finished" podID="da929df5-1e6a-47d5-85b4-6ec2b408203e" containerID="06d6e388fdecd0356703237e6e0279e4c2a5c45a270cbc144c451b647da6f5c4" exitCode=0 Jan 25 00:37:53 crc kubenswrapper[4947]: I0125 00:37:53.705147 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" event={"ID":"da929df5-1e6a-47d5-85b4-6ec2b408203e","Type":"ContainerDied","Data":"06d6e388fdecd0356703237e6e0279e4c2a5c45a270cbc144c451b647da6f5c4"} Jan 25 00:37:54 crc kubenswrapper[4947]: I0125 00:37:54.977741 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.061341 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfprm\" (UniqueName: \"kubernetes.io/projected/da929df5-1e6a-47d5-85b4-6ec2b408203e-kube-api-access-vfprm\") pod \"da929df5-1e6a-47d5-85b4-6ec2b408203e\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.061436 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-entrypoint-script\") pod \"da929df5-1e6a-47d5-85b4-6ec2b408203e\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.061474 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-healthcheck-log\") pod \"da929df5-1e6a-47d5-85b4-6ec2b408203e\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.061563 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-publisher\") pod \"da929df5-1e6a-47d5-85b4-6ec2b408203e\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.061583 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-sensubility-config\") pod \"da929df5-1e6a-47d5-85b4-6ec2b408203e\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.061620 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-config\") pod \"da929df5-1e6a-47d5-85b4-6ec2b408203e\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.061644 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-entrypoint-script\") pod \"da929df5-1e6a-47d5-85b4-6ec2b408203e\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.070074 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da929df5-1e6a-47d5-85b4-6ec2b408203e-kube-api-access-vfprm" (OuterVolumeSpecName: "kube-api-access-vfprm") pod "da929df5-1e6a-47d5-85b4-6ec2b408203e" (UID: "da929df5-1e6a-47d5-85b4-6ec2b408203e"). InnerVolumeSpecName "kube-api-access-vfprm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.079221 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "da929df5-1e6a-47d5-85b4-6ec2b408203e" (UID: "da929df5-1e6a-47d5-85b4-6ec2b408203e"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.079376 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "da929df5-1e6a-47d5-85b4-6ec2b408203e" (UID: "da929df5-1e6a-47d5-85b4-6ec2b408203e"). InnerVolumeSpecName "healthcheck-log". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.081754 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "da929df5-1e6a-47d5-85b4-6ec2b408203e" (UID: "da929df5-1e6a-47d5-85b4-6ec2b408203e"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.083983 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "da929df5-1e6a-47d5-85b4-6ec2b408203e" (UID: "da929df5-1e6a-47d5-85b4-6ec2b408203e"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.087717 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "da929df5-1e6a-47d5-85b4-6ec2b408203e" (UID: "da929df5-1e6a-47d5-85b4-6ec2b408203e"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.091417 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "da929df5-1e6a-47d5-85b4-6ec2b408203e" (UID: "da929df5-1e6a-47d5-85b4-6ec2b408203e"). InnerVolumeSpecName "ceilometer-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.164874 4947 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.164909 4947 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.165005 4947 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.165066 4947 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.165094 4947 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.165117 4947 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.165182 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfprm\" (UniqueName: \"kubernetes.io/projected/da929df5-1e6a-47d5-85b4-6ec2b408203e-kube-api-access-vfprm\") on node 
\"crc\" DevicePath \"\"" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.729627 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" event={"ID":"da929df5-1e6a-47d5-85b4-6ec2b408203e","Type":"ContainerDied","Data":"110cf8b2b20c6c29231bf9ef83d8cfeceac4b9679d2965b70318ddee151b410e"} Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.729744 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.729722 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="110cf8b2b20c6c29231bf9ef83d8cfeceac4b9679d2965b70318ddee151b410e" Jan 25 00:37:57 crc kubenswrapper[4947]: I0125 00:37:57.298807 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-7ffht_b6c83c5f-2bee-41a9-8433-391c8e71812b/smoketest-collectd/0.log" Jan 25 00:37:57 crc kubenswrapper[4947]: I0125 00:37:57.663527 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-7ffht_b6c83c5f-2bee-41a9-8433-391c8e71812b/smoketest-ceilometer/0.log" Jan 25 00:37:58 crc kubenswrapper[4947]: I0125 00:37:58.007823 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-qb5z8_53107577-1f0a-4c8d-b5e5-81e4d415f3a1/default-interconnect/0.log" Jan 25 00:37:58 crc kubenswrapper[4947]: I0125 00:37:58.382009 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf_0d98fa0e-0a1b-4139-b32a-dbc771dc0939/bridge/2.log" Jan 25 00:37:58 crc kubenswrapper[4947]: I0125 00:37:58.668847 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf_0d98fa0e-0a1b-4139-b32a-dbc771dc0939/sg-core/0.log" Jan 25 00:37:58 
crc kubenswrapper[4947]: I0125 00:37:58.996594 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-9c5498458-pjx7f_9b3c0215-a9e0-45e1-a844-c93fd70138c9/bridge/2.log" Jan 25 00:37:59 crc kubenswrapper[4947]: I0125 00:37:59.387812 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-9c5498458-pjx7f_9b3c0215-a9e0-45e1-a844-c93fd70138c9/sg-core/0.log" Jan 25 00:37:59 crc kubenswrapper[4947]: I0125 00:37:59.788050 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n_d0837a20-7313-4da0-9df6-1ce849d1f029/bridge/2.log" Jan 25 00:38:00 crc kubenswrapper[4947]: I0125 00:38:00.121553 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n_d0837a20-7313-4da0-9df6-1ce849d1f029/sg-core/0.log" Jan 25 00:38:00 crc kubenswrapper[4947]: I0125 00:38:00.458778 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-899c7f46d-5982c_b022a945-2af3-4275-bc4b-5db0790be691/bridge/2.log" Jan 25 00:38:00 crc kubenswrapper[4947]: I0125 00:38:00.790456 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-899c7f46d-5982c_b022a945-2af3-4275-bc4b-5db0790be691/sg-core/0.log" Jan 25 00:38:01 crc kubenswrapper[4947]: I0125 00:38:01.095294 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:38:01 crc kubenswrapper[4947]: E0125 00:38:01.095624 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:38:01 crc kubenswrapper[4947]: I0125 00:38:01.139489 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm_5ef6c1dd-abb0-4c2f-8aa5-13614c09e445/bridge/2.log" Jan 25 00:38:01 crc kubenswrapper[4947]: I0125 00:38:01.543627 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm_5ef6c1dd-abb0-4c2f-8aa5-13614c09e445/sg-core/0.log" Jan 25 00:38:05 crc kubenswrapper[4947]: I0125 00:38:05.191563 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-859d6d5949-k28x7_043c18a1-e602-4917-b73d-5331da5ee62f/operator/0.log" Jan 25 00:38:05 crc kubenswrapper[4947]: I0125 00:38:05.589167 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_3d82b42a-6236-43af-8190-d28e96b2b933/prometheus/0.log" Jan 25 00:38:05 crc kubenswrapper[4947]: I0125 00:38:05.997313 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6/elasticsearch/0.log" Jan 25 00:38:06 crc kubenswrapper[4947]: I0125 00:38:06.377104 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-4gkbj_b6cfc9d0-598c-4149-8a38-ec02ced8d2b8/prometheus-webhook-snmp/0.log" Jan 25 00:38:06 crc kubenswrapper[4947]: I0125 00:38:06.702620 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_4888dfdb-3780-4d4b-ad3a-4c1238a72464/alertmanager/0.log" Jan 25 00:38:12 crc kubenswrapper[4947]: I0125 00:38:12.090639 4947 scope.go:117] "RemoveContainer" 
containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:38:12 crc kubenswrapper[4947]: E0125 00:38:12.091490 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:38:22 crc kubenswrapper[4947]: I0125 00:38:22.594173 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-7b5f5cc44d-grff4_ccea2ce4-d212-4599-a152-5a2d53366128/operator/0.log" Jan 25 00:38:26 crc kubenswrapper[4947]: I0125 00:38:26.089423 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:38:26 crc kubenswrapper[4947]: E0125 00:38:26.089816 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:38:26 crc kubenswrapper[4947]: I0125 00:38:26.205139 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-859d6d5949-k28x7_043c18a1-e602-4917-b73d-5331da5ee62f/operator/0.log" Jan 25 00:38:26 crc kubenswrapper[4947]: I0125 00:38:26.528261 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_1e239640-20ad-42e0-8db4-0ada55b1274c/qdr/0.log" Jan 25 00:38:37 crc kubenswrapper[4947]: I0125 00:38:37.089812 4947 
scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:38:37 crc kubenswrapper[4947]: E0125 00:38:37.091021 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:38:50 crc kubenswrapper[4947]: I0125 00:38:50.089909 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:38:50 crc kubenswrapper[4947]: E0125 00:38:50.091043 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.019307 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8b4ht/must-gather-vgn5q"] Jan 25 00:38:53 crc kubenswrapper[4947]: E0125 00:38:53.019819 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da929df5-1e6a-47d5-85b4-6ec2b408203e" containerName="smoketest-ceilometer" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.019833 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="da929df5-1e6a-47d5-85b4-6ec2b408203e" containerName="smoketest-ceilometer" Jan 25 00:38:53 crc kubenswrapper[4947]: E0125 00:38:53.019841 4947 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="da929df5-1e6a-47d5-85b4-6ec2b408203e" containerName="smoketest-collectd" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.019847 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="da929df5-1e6a-47d5-85b4-6ec2b408203e" containerName="smoketest-collectd" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.019966 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="da929df5-1e6a-47d5-85b4-6ec2b408203e" containerName="smoketest-ceilometer" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.019979 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="da929df5-1e6a-47d5-85b4-6ec2b408203e" containerName="smoketest-collectd" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.020700 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.022631 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8b4ht"/"default-dockercfg-mhw8v" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.022686 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8b4ht"/"kube-root-ca.crt" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.023227 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8b4ht"/"openshift-service-ca.crt" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.042758 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8b4ht/must-gather-vgn5q"] Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.119858 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwpmj\" (UniqueName: \"kubernetes.io/projected/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-kube-api-access-fwpmj\") pod \"must-gather-vgn5q\" (UID: \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\") " 
pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.119925 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-must-gather-output\") pod \"must-gather-vgn5q\" (UID: \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\") " pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.221436 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwpmj\" (UniqueName: \"kubernetes.io/projected/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-kube-api-access-fwpmj\") pod \"must-gather-vgn5q\" (UID: \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\") " pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.221491 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-must-gather-output\") pod \"must-gather-vgn5q\" (UID: \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\") " pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.221995 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-must-gather-output\") pod \"must-gather-vgn5q\" (UID: \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\") " pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.242669 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwpmj\" (UniqueName: \"kubernetes.io/projected/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-kube-api-access-fwpmj\") pod \"must-gather-vgn5q\" (UID: \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\") " 
pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.342831 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.579907 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8b4ht/must-gather-vgn5q"] Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.592612 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 00:38:54 crc kubenswrapper[4947]: I0125 00:38:54.268532 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" event={"ID":"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8","Type":"ContainerStarted","Data":"6ac178507348be2e0cd69ce208e03c0c37c472f9596314eaf57fe56b2f76bc78"} Jan 25 00:39:01 crc kubenswrapper[4947]: I0125 00:39:01.332970 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" event={"ID":"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8","Type":"ContainerStarted","Data":"447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0"} Jan 25 00:39:01 crc kubenswrapper[4947]: I0125 00:39:01.333594 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" event={"ID":"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8","Type":"ContainerStarted","Data":"b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2"} Jan 25 00:39:02 crc kubenswrapper[4947]: I0125 00:39:02.090046 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:39:02 crc kubenswrapper[4947]: E0125 00:39:02.090307 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:39:13 crc kubenswrapper[4947]: I0125 00:39:13.089543 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:39:13 crc kubenswrapper[4947]: E0125 00:39:13.090522 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:39:24 crc kubenswrapper[4947]: I0125 00:39:24.090096 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:39:24 crc kubenswrapper[4947]: E0125 00:39:24.092853 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:39:39 crc kubenswrapper[4947]: I0125 00:39:39.090189 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:39:39 crc kubenswrapper[4947]: E0125 00:39:39.091034 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:39:48 crc kubenswrapper[4947]: I0125 00:39:48.442170 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vwtw5_f56c1338-08c8-47de-b24a-3aaf85e315f8/control-plane-machine-set-operator/0.log" Jan 25 00:39:48 crc kubenswrapper[4947]: I0125 00:39:48.557822 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-h6jgn_b8f2f610-05dc-49ea-882e-634d283b3caa/kube-rbac-proxy/0.log" Jan 25 00:39:48 crc kubenswrapper[4947]: I0125 00:39:48.634941 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-h6jgn_b8f2f610-05dc-49ea-882e-634d283b3caa/machine-api-operator/0.log" Jan 25 00:39:53 crc kubenswrapper[4947]: I0125 00:39:53.089957 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:39:53 crc kubenswrapper[4947]: E0125 00:39:53.090865 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:40:02 crc kubenswrapper[4947]: I0125 00:40:02.077190 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-tgcft_6beb1442-5e99-4164-8077-50d6eb5dbd44/cert-manager-controller/0.log" Jan 25 00:40:02 crc kubenswrapper[4947]: I0125 
00:40:02.193681 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-cqxvz_c215f860-08a3-4dbd-b7f2-426286319aa8/cert-manager-cainjector/0.log" Jan 25 00:40:02 crc kubenswrapper[4947]: I0125 00:40:02.279387 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-wqxxr_d860ec8b-2f41-4b81-8868-9b078b55b341/cert-manager-webhook/0.log" Jan 25 00:40:08 crc kubenswrapper[4947]: I0125 00:40:08.089440 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:40:08 crc kubenswrapper[4947]: E0125 00:40:08.091516 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:40:16 crc kubenswrapper[4947]: I0125 00:40:16.673543 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-wjw4s_3e662e75-c8ba-4da8-856f-9fc73a2316aa/prometheus-operator/0.log" Jan 25 00:40:16 crc kubenswrapper[4947]: I0125 00:40:16.853445 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k_ae208ca2-2ac2-4a6a-b88e-127c986f32a5/prometheus-operator-admission-webhook/0.log" Jan 25 00:40:16 crc kubenswrapper[4947]: I0125 00:40:16.890610 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx_a3860bf6-f86b-4206-a225-6fa61372a988/prometheus-operator-admission-webhook/0.log" Jan 25 00:40:17 crc kubenswrapper[4947]: I0125 
00:40:17.049579 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-4v5sm_9d3adf01-5529-4edb-9b7f-f3c782156a8d/operator/0.log" Jan 25 00:40:17 crc kubenswrapper[4947]: I0125 00:40:17.081814 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-qz44g_38944919-0d65-4fdd-b2bd-2780f8e77bde/perses-operator/0.log" Jan 25 00:40:21 crc kubenswrapper[4947]: I0125 00:40:21.097743 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:40:22 crc kubenswrapper[4947]: I0125 00:40:22.065868 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"7004e8129c6b88f58bbbee7984e53d0d93f2f96afa9dd97b9bbb53a49f0a5277"} Jan 25 00:40:22 crc kubenswrapper[4947]: I0125 00:40:22.082403 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" podStartSLOduration=83.566024468 podStartE2EDuration="1m30.082384649s" podCreationTimestamp="2026-01-25 00:38:52 +0000 UTC" firstStartedPulling="2026-01-25 00:38:53.592303303 +0000 UTC m=+1772.825293753" lastFinishedPulling="2026-01-25 00:39:00.108663494 +0000 UTC m=+1779.341653934" observedRunningTime="2026-01-25 00:39:01.351048262 +0000 UTC m=+1780.584038712" watchObservedRunningTime="2026-01-25 00:40:22.082384649 +0000 UTC m=+1861.315375089" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.076001 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf_373809d6-f72c-4eff-afeb-1fa942bb9e22/util/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.229410 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf_373809d6-f72c-4eff-afeb-1fa942bb9e22/util/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.254949 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf_373809d6-f72c-4eff-afeb-1fa942bb9e22/pull/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.299328 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf_373809d6-f72c-4eff-afeb-1fa942bb9e22/pull/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.580014 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf_373809d6-f72c-4eff-afeb-1fa942bb9e22/util/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.593038 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf_373809d6-f72c-4eff-afeb-1fa942bb9e22/extract/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.595754 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf_373809d6-f72c-4eff-afeb-1fa942bb9e22/pull/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.727906 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk_b18bd971-05aa-4366-8829-6d2db0f3a1a0/util/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.948982 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk_b18bd971-05aa-4366-8829-6d2db0f3a1a0/pull/0.log" Jan 25 
00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.949892 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk_b18bd971-05aa-4366-8829-6d2db0f3a1a0/pull/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.950321 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk_b18bd971-05aa-4366-8829-6d2db0f3a1a0/util/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.151723 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk_b18bd971-05aa-4366-8829-6d2db0f3a1a0/extract/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.158819 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk_b18bd971-05aa-4366-8829-6d2db0f3a1a0/util/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.165968 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk_b18bd971-05aa-4366-8829-6d2db0f3a1a0/pull/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.356910 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm_1cfc506e-cb97-4eb4-a967-d5ea940b5ce9/util/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.491897 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm_1cfc506e-cb97-4eb4-a967-d5ea940b5ce9/util/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.529923 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm_1cfc506e-cb97-4eb4-a967-d5ea940b5ce9/pull/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.532151 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm_1cfc506e-cb97-4eb4-a967-d5ea940b5ce9/pull/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.689659 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm_1cfc506e-cb97-4eb4-a967-d5ea940b5ce9/util/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.731029 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm_1cfc506e-cb97-4eb4-a967-d5ea940b5ce9/extract/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.751825 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm_1cfc506e-cb97-4eb4-a967-d5ea940b5ce9/pull/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.863080 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw_e1924f8a-318d-4d3b-ada5-703cf399beed/util/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.071824 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw_e1924f8a-318d-4d3b-ada5-703cf399beed/util/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.102028 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw_e1924f8a-318d-4d3b-ada5-703cf399beed/pull/0.log" Jan 25 
00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.120949 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw_e1924f8a-318d-4d3b-ada5-703cf399beed/pull/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.362495 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw_e1924f8a-318d-4d3b-ada5-703cf399beed/util/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.397841 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw_e1924f8a-318d-4d3b-ada5-703cf399beed/extract/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.400902 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw_e1924f8a-318d-4d3b-ada5-703cf399beed/pull/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.553104 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lkvvh_8f150ea3-0af6-4206-9d74-e15f901e571b/extract-utilities/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.739460 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lkvvh_8f150ea3-0af6-4206-9d74-e15f901e571b/extract-utilities/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.739570 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lkvvh_8f150ea3-0af6-4206-9d74-e15f901e571b/extract-content/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.788013 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lkvvh_8f150ea3-0af6-4206-9d74-e15f901e571b/extract-content/0.log" Jan 25 
00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.946237 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lkvvh_8f150ea3-0af6-4206-9d74-e15f901e571b/extract-utilities/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.952713 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lkvvh_8f150ea3-0af6-4206-9d74-e15f901e571b/extract-content/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.155850 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g7982_5e39c693-6291-4810-863e-fd3e5cd45fbc/extract-utilities/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.244753 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lkvvh_8f150ea3-0af6-4206-9d74-e15f901e571b/registry-server/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.394249 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g7982_5e39c693-6291-4810-863e-fd3e5cd45fbc/extract-content/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.397834 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g7982_5e39c693-6291-4810-863e-fd3e5cd45fbc/extract-utilities/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.434064 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g7982_5e39c693-6291-4810-863e-fd3e5cd45fbc/extract-content/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.552064 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g7982_5e39c693-6291-4810-863e-fd3e5cd45fbc/extract-utilities/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.683492 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-g7982_5e39c693-6291-4810-863e-fd3e5cd45fbc/extract-content/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.747091 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g7982_5e39c693-6291-4810-863e-fd3e5cd45fbc/registry-server/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.763004 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mbj6z_94a09856-1120-4003-a601-ee3c9121eb51/marketplace-operator/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.869580 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m2ddl_e8adfaf1-4e17-430c-970e-1cbf2e58c18a/extract-utilities/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.981036 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m2ddl_e8adfaf1-4e17-430c-970e-1cbf2e58c18a/extract-utilities/0.log" Jan 25 00:40:37 crc kubenswrapper[4947]: I0125 00:40:37.024339 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m2ddl_e8adfaf1-4e17-430c-970e-1cbf2e58c18a/extract-content/0.log" Jan 25 00:40:37 crc kubenswrapper[4947]: I0125 00:40:37.046938 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m2ddl_e8adfaf1-4e17-430c-970e-1cbf2e58c18a/extract-content/0.log" Jan 25 00:40:37 crc kubenswrapper[4947]: I0125 00:40:37.217426 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m2ddl_e8adfaf1-4e17-430c-970e-1cbf2e58c18a/extract-content/0.log" Jan 25 00:40:37 crc kubenswrapper[4947]: I0125 00:40:37.307353 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-m2ddl_e8adfaf1-4e17-430c-970e-1cbf2e58c18a/extract-utilities/0.log" Jan 25 00:40:37 crc kubenswrapper[4947]: I0125 00:40:37.498607 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m2ddl_e8adfaf1-4e17-430c-970e-1cbf2e58c18a/registry-server/0.log" Jan 25 00:40:50 crc kubenswrapper[4947]: I0125 00:40:50.977277 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx_a3860bf6-f86b-4206-a225-6fa61372a988/prometheus-operator-admission-webhook/0.log" Jan 25 00:40:50 crc kubenswrapper[4947]: I0125 00:40:50.981795 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-wjw4s_3e662e75-c8ba-4da8-856f-9fc73a2316aa/prometheus-operator/0.log" Jan 25 00:40:51 crc kubenswrapper[4947]: I0125 00:40:51.001391 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k_ae208ca2-2ac2-4a6a-b88e-127c986f32a5/prometheus-operator-admission-webhook/0.log" Jan 25 00:40:51 crc kubenswrapper[4947]: I0125 00:40:51.148315 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-4v5sm_9d3adf01-5529-4edb-9b7f-f3c782156a8d/operator/0.log" Jan 25 00:40:51 crc kubenswrapper[4947]: I0125 00:40:51.194534 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-qz44g_38944919-0d65-4fdd-b2bd-2780f8e77bde/perses-operator/0.log" Jan 25 00:41:40 crc kubenswrapper[4947]: I0125 00:41:40.740445 4947 generic.go:334] "Generic (PLEG): container finished" podID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerID="b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2" exitCode=0 Jan 25 00:41:40 crc kubenswrapper[4947]: I0125 00:41:40.740574 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" event={"ID":"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8","Type":"ContainerDied","Data":"b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2"} Jan 25 00:41:40 crc kubenswrapper[4947]: I0125 00:41:40.741771 4947 scope.go:117] "RemoveContainer" containerID="b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2" Jan 25 00:41:41 crc kubenswrapper[4947]: I0125 00:41:41.517745 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8b4ht_must-gather-vgn5q_1ea54a3f-8bf0-481a-ae91-236c89f6e1f8/gather/0.log" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.292793 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8b4ht/must-gather-vgn5q"] Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.293821 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" podUID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerName="copy" containerID="cri-o://447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0" gracePeriod=2 Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.301071 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8b4ht/must-gather-vgn5q"] Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.681015 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8b4ht_must-gather-vgn5q_1ea54a3f-8bf0-481a-ae91-236c89f6e1f8/copy/0.log" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.681954 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.721329 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwpmj\" (UniqueName: \"kubernetes.io/projected/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-kube-api-access-fwpmj\") pod \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\" (UID: \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\") " Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.721728 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-must-gather-output\") pod \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\" (UID: \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\") " Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.727665 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-kube-api-access-fwpmj" (OuterVolumeSpecName: "kube-api-access-fwpmj") pod "1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" (UID: "1ea54a3f-8bf0-481a-ae91-236c89f6e1f8"). InnerVolumeSpecName "kube-api-access-fwpmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.789077 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" (UID: "1ea54a3f-8bf0-481a-ae91-236c89f6e1f8"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.807046 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8b4ht_must-gather-vgn5q_1ea54a3f-8bf0-481a-ae91-236c89f6e1f8/copy/0.log" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.807521 4947 generic.go:334] "Generic (PLEG): container finished" podID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerID="447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0" exitCode=143 Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.807573 4947 scope.go:117] "RemoveContainer" containerID="447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.807618 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.824709 4947 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.824742 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwpmj\" (UniqueName: \"kubernetes.io/projected/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-kube-api-access-fwpmj\") on node \"crc\" DevicePath \"\"" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.830257 4947 scope.go:117] "RemoveContainer" containerID="b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.868827 4947 scope.go:117] "RemoveContainer" containerID="447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0" Jan 25 00:41:48 crc kubenswrapper[4947]: E0125 00:41:48.869270 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0\": container with ID starting with 447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0 not found: ID does not exist" containerID="447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.869307 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0"} err="failed to get container status \"447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0\": rpc error: code = NotFound desc = could not find container \"447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0\": container with ID starting with 447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0 not found: ID does not exist" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.869329 4947 scope.go:117] "RemoveContainer" containerID="b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2" Jan 25 00:41:48 crc kubenswrapper[4947]: E0125 00:41:48.869715 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2\": container with ID starting with b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2 not found: ID does not exist" containerID="b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.869741 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2"} err="failed to get container status \"b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2\": rpc error: code = NotFound desc = could not find container \"b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2\": 
container with ID starting with b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2 not found: ID does not exist" Jan 25 00:41:49 crc kubenswrapper[4947]: I0125 00:41:49.097354 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" path="/var/lib/kubelet/pods/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8/volumes" Jan 25 00:42:47 crc kubenswrapper[4947]: I0125 00:42:47.073623 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:42:47 crc kubenswrapper[4947]: I0125 00:42:47.074146 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.346020 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-594cn"] Jan 25 00:43:00 crc kubenswrapper[4947]: E0125 00:43:00.347190 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerName="copy" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.347219 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerName="copy" Jan 25 00:43:00 crc kubenswrapper[4947]: E0125 00:43:00.347271 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerName="gather" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.347287 4947 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerName="gather" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.347523 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerName="gather" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.347557 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerName="copy" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.349646 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.381364 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-594cn"] Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.412448 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-catalog-content\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.412571 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-utilities\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.412670 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffkv4\" (UniqueName: \"kubernetes.io/projected/79992216-2d93-4b69-99c7-1c9ae5f449ec-kube-api-access-ffkv4\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " 
pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.514624 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-catalog-content\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.515112 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-utilities\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.515263 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffkv4\" (UniqueName: \"kubernetes.io/projected/79992216-2d93-4b69-99c7-1c9ae5f449ec-kube-api-access-ffkv4\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.515401 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-catalog-content\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.515917 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-utilities\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc 
kubenswrapper[4947]: I0125 00:43:00.536687 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffkv4\" (UniqueName: \"kubernetes.io/projected/79992216-2d93-4b69-99c7-1c9ae5f449ec-kube-api-access-ffkv4\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.686293 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.943094 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-594cn"] Jan 25 00:43:01 crc kubenswrapper[4947]: I0125 00:43:01.512610 4947 generic.go:334] "Generic (PLEG): container finished" podID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerID="2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba" exitCode=0 Jan 25 00:43:01 crc kubenswrapper[4947]: I0125 00:43:01.512655 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-594cn" event={"ID":"79992216-2d93-4b69-99c7-1c9ae5f449ec","Type":"ContainerDied","Data":"2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba"} Jan 25 00:43:01 crc kubenswrapper[4947]: I0125 00:43:01.512682 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-594cn" event={"ID":"79992216-2d93-4b69-99c7-1c9ae5f449ec","Type":"ContainerStarted","Data":"7919acf83a48215938a002c1ea9840e60c59f2adaa6efa5c698adc20f460037f"} Jan 25 00:43:02 crc kubenswrapper[4947]: I0125 00:43:02.521248 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-594cn" event={"ID":"79992216-2d93-4b69-99c7-1c9ae5f449ec","Type":"ContainerStarted","Data":"fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f"} Jan 25 00:43:03 crc kubenswrapper[4947]: I0125 
00:43:03.534046 4947 generic.go:334] "Generic (PLEG): container finished" podID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerID="fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f" exitCode=0 Jan 25 00:43:03 crc kubenswrapper[4947]: I0125 00:43:03.534154 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-594cn" event={"ID":"79992216-2d93-4b69-99c7-1c9ae5f449ec","Type":"ContainerDied","Data":"fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f"} Jan 25 00:43:04 crc kubenswrapper[4947]: I0125 00:43:04.552709 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-594cn" event={"ID":"79992216-2d93-4b69-99c7-1c9ae5f449ec","Type":"ContainerStarted","Data":"9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc"} Jan 25 00:43:10 crc kubenswrapper[4947]: I0125 00:43:10.687086 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:10 crc kubenswrapper[4947]: I0125 00:43:10.687688 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:11 crc kubenswrapper[4947]: I0125 00:43:11.744732 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-594cn" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="registry-server" probeResult="failure" output=< Jan 25 00:43:11 crc kubenswrapper[4947]: timeout: failed to connect service ":50051" within 1s Jan 25 00:43:11 crc kubenswrapper[4947]: > Jan 25 00:43:17 crc kubenswrapper[4947]: I0125 00:43:17.072903 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 
00:43:17 crc kubenswrapper[4947]: I0125 00:43:17.073541 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:43:20 crc kubenswrapper[4947]: I0125 00:43:20.762913 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:20 crc kubenswrapper[4947]: I0125 00:43:20.795819 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-594cn" podStartSLOduration=18.300742681 podStartE2EDuration="20.795790005s" podCreationTimestamp="2026-01-25 00:43:00 +0000 UTC" firstStartedPulling="2026-01-25 00:43:01.514348162 +0000 UTC m=+2020.747338592" lastFinishedPulling="2026-01-25 00:43:04.009395436 +0000 UTC m=+2023.242385916" observedRunningTime="2026-01-25 00:43:04.582218234 +0000 UTC m=+2023.815208744" watchObservedRunningTime="2026-01-25 00:43:20.795790005 +0000 UTC m=+2040.028780485" Jan 25 00:43:20 crc kubenswrapper[4947]: I0125 00:43:20.841932 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:21 crc kubenswrapper[4947]: I0125 00:43:21.010706 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-594cn"] Jan 25 00:43:22 crc kubenswrapper[4947]: I0125 00:43:22.846322 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-594cn" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="registry-server" containerID="cri-o://9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc" gracePeriod=2 Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.297740 
4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.461722 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-catalog-content\") pod \"79992216-2d93-4b69-99c7-1c9ae5f449ec\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.461936 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-utilities\") pod \"79992216-2d93-4b69-99c7-1c9ae5f449ec\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.463427 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-utilities" (OuterVolumeSpecName: "utilities") pod "79992216-2d93-4b69-99c7-1c9ae5f449ec" (UID: "79992216-2d93-4b69-99c7-1c9ae5f449ec"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.463789 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffkv4\" (UniqueName: \"kubernetes.io/projected/79992216-2d93-4b69-99c7-1c9ae5f449ec-kube-api-access-ffkv4\") pod \"79992216-2d93-4b69-99c7-1c9ae5f449ec\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.464527 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.473371 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79992216-2d93-4b69-99c7-1c9ae5f449ec-kube-api-access-ffkv4" (OuterVolumeSpecName: "kube-api-access-ffkv4") pod "79992216-2d93-4b69-99c7-1c9ae5f449ec" (UID: "79992216-2d93-4b69-99c7-1c9ae5f449ec"). InnerVolumeSpecName "kube-api-access-ffkv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.565993 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffkv4\" (UniqueName: \"kubernetes.io/projected/79992216-2d93-4b69-99c7-1c9ae5f449ec-kube-api-access-ffkv4\") on node \"crc\" DevicePath \"\"" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.654003 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79992216-2d93-4b69-99c7-1c9ae5f449ec" (UID: "79992216-2d93-4b69-99c7-1c9ae5f449ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.666863 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.858599 4947 generic.go:334] "Generic (PLEG): container finished" podID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerID="9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc" exitCode=0 Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.858661 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-594cn" event={"ID":"79992216-2d93-4b69-99c7-1c9ae5f449ec","Type":"ContainerDied","Data":"9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc"} Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.858680 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.858699 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-594cn" event={"ID":"79992216-2d93-4b69-99c7-1c9ae5f449ec","Type":"ContainerDied","Data":"7919acf83a48215938a002c1ea9840e60c59f2adaa6efa5c698adc20f460037f"} Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.858727 4947 scope.go:117] "RemoveContainer" containerID="9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.894539 4947 scope.go:117] "RemoveContainer" containerID="fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.895674 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-594cn"] Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.903904 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-594cn"] Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.925831 4947 scope.go:117] "RemoveContainer" containerID="2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.955427 4947 scope.go:117] "RemoveContainer" containerID="9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc" Jan 25 00:43:23 crc kubenswrapper[4947]: E0125 00:43:23.956019 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc\": container with ID starting with 9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc not found: ID does not exist" containerID="9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.956058 4947 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc"} err="failed to get container status \"9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc\": rpc error: code = NotFound desc = could not find container \"9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc\": container with ID starting with 9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc not found: ID does not exist" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.956085 4947 scope.go:117] "RemoveContainer" containerID="fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f" Jan 25 00:43:23 crc kubenswrapper[4947]: E0125 00:43:23.956433 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f\": container with ID starting with fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f not found: ID does not exist" containerID="fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.956474 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f"} err="failed to get container status \"fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f\": rpc error: code = NotFound desc = could not find container \"fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f\": container with ID starting with fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f not found: ID does not exist" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.956499 4947 scope.go:117] "RemoveContainer" containerID="2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba" Jan 25 00:43:23 crc kubenswrapper[4947]: E0125 
00:43:23.956827 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba\": container with ID starting with 2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba not found: ID does not exist" containerID="2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.956862 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba"} err="failed to get container status \"2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba\": rpc error: code = NotFound desc = could not find container \"2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba\": container with ID starting with 2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba not found: ID does not exist" Jan 25 00:43:25 crc kubenswrapper[4947]: I0125 00:43:25.108003 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" path="/var/lib/kubelet/pods/79992216-2d93-4b69-99c7-1c9ae5f449ec/volumes" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.692042 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2n7hf"] Jan 25 00:43:32 crc kubenswrapper[4947]: E0125 00:43:32.693822 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="extract-content" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.693852 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="extract-content" Jan 25 00:43:32 crc kubenswrapper[4947]: E0125 00:43:32.693884 4947 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="extract-utilities" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.693897 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="extract-utilities" Jan 25 00:43:32 crc kubenswrapper[4947]: E0125 00:43:32.693932 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="registry-server" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.693950 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="registry-server" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.694231 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="registry-server" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.695838 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.697096 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2n7hf"] Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.812388 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-catalog-content\") pod \"community-operators-2n7hf\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.812442 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-utilities\") pod \"community-operators-2n7hf\" (UID: 
\"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.812503 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th82f\" (UniqueName: \"kubernetes.io/projected/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-kube-api-access-th82f\") pod \"community-operators-2n7hf\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.913806 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th82f\" (UniqueName: \"kubernetes.io/projected/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-kube-api-access-th82f\") pod \"community-operators-2n7hf\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.914065 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-catalog-content\") pod \"community-operators-2n7hf\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.914185 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-utilities\") pod \"community-operators-2n7hf\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.914634 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-utilities\") pod \"community-operators-2n7hf\" (UID: 
\"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.914762 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-catalog-content\") pod \"community-operators-2n7hf\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.932686 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th82f\" (UniqueName: \"kubernetes.io/projected/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-kube-api-access-th82f\") pod \"community-operators-2n7hf\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:33 crc kubenswrapper[4947]: I0125 00:43:33.021788 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:33 crc kubenswrapper[4947]: I0125 00:43:33.552802 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2n7hf"] Jan 25 00:43:33 crc kubenswrapper[4947]: W0125 00:43:33.570212 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cb7ba49_d823_4f53_b090_c6bcb63d57fc.slice/crio-c495e3d9497b30a3ea665c2ada21749bfeb4828baad2fe1d515c93807d553230 WatchSource:0}: Error finding container c495e3d9497b30a3ea665c2ada21749bfeb4828baad2fe1d515c93807d553230: Status 404 returned error can't find the container with id c495e3d9497b30a3ea665c2ada21749bfeb4828baad2fe1d515c93807d553230 Jan 25 00:43:33 crc kubenswrapper[4947]: I0125 00:43:33.962082 4947 generic.go:334] "Generic (PLEG): container finished" podID="2cb7ba49-d823-4f53-b090-c6bcb63d57fc" containerID="ce06e6e7a8b9450315d329689672bfe26eafdb550594e52df21e546cba8ec88e" exitCode=0 Jan 25 00:43:33 crc kubenswrapper[4947]: I0125 00:43:33.962165 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n7hf" event={"ID":"2cb7ba49-d823-4f53-b090-c6bcb63d57fc","Type":"ContainerDied","Data":"ce06e6e7a8b9450315d329689672bfe26eafdb550594e52df21e546cba8ec88e"} Jan 25 00:43:33 crc kubenswrapper[4947]: I0125 00:43:33.962538 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n7hf" event={"ID":"2cb7ba49-d823-4f53-b090-c6bcb63d57fc","Type":"ContainerStarted","Data":"c495e3d9497b30a3ea665c2ada21749bfeb4828baad2fe1d515c93807d553230"} Jan 25 00:43:34 crc kubenswrapper[4947]: I0125 00:43:34.972273 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n7hf" 
event={"ID":"2cb7ba49-d823-4f53-b090-c6bcb63d57fc","Type":"ContainerStarted","Data":"0898891484ecb7265ed7cfc5e280994b414f950f0ed19b2484c3bb95f332415a"} Jan 25 00:43:35 crc kubenswrapper[4947]: I0125 00:43:35.985309 4947 generic.go:334] "Generic (PLEG): container finished" podID="2cb7ba49-d823-4f53-b090-c6bcb63d57fc" containerID="0898891484ecb7265ed7cfc5e280994b414f950f0ed19b2484c3bb95f332415a" exitCode=0 Jan 25 00:43:35 crc kubenswrapper[4947]: I0125 00:43:35.985406 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n7hf" event={"ID":"2cb7ba49-d823-4f53-b090-c6bcb63d57fc","Type":"ContainerDied","Data":"0898891484ecb7265ed7cfc5e280994b414f950f0ed19b2484c3bb95f332415a"} Jan 25 00:43:36 crc kubenswrapper[4947]: I0125 00:43:36.994662 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n7hf" event={"ID":"2cb7ba49-d823-4f53-b090-c6bcb63d57fc","Type":"ContainerStarted","Data":"9f06d0ad16185fe626e595b85509c4907607d6f1f3f1aa98675a6d73f0f530b7"} Jan 25 00:43:37 crc kubenswrapper[4947]: I0125 00:43:37.033697 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2n7hf" podStartSLOduration=2.48280829 podStartE2EDuration="5.033672843s" podCreationTimestamp="2026-01-25 00:43:32 +0000 UTC" firstStartedPulling="2026-01-25 00:43:33.9637048 +0000 UTC m=+2053.196695240" lastFinishedPulling="2026-01-25 00:43:36.514569323 +0000 UTC m=+2055.747559793" observedRunningTime="2026-01-25 00:43:37.025492226 +0000 UTC m=+2056.258482686" watchObservedRunningTime="2026-01-25 00:43:37.033672843 +0000 UTC m=+2056.266663313" Jan 25 00:43:43 crc kubenswrapper[4947]: I0125 00:43:43.022505 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:43 crc kubenswrapper[4947]: I0125 00:43:43.023182 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:43 crc kubenswrapper[4947]: I0125 00:43:43.100706 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:43 crc kubenswrapper[4947]: I0125 00:43:43.170877 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:43 crc kubenswrapper[4947]: I0125 00:43:43.344860 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2n7hf"] Jan 25 00:43:45 crc kubenswrapper[4947]: I0125 00:43:45.071727 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2n7hf" podUID="2cb7ba49-d823-4f53-b090-c6bcb63d57fc" containerName="registry-server" containerID="cri-o://9f06d0ad16185fe626e595b85509c4907607d6f1f3f1aa98675a6d73f0f530b7" gracePeriod=2 Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.079867 4947 generic.go:334] "Generic (PLEG): container finished" podID="2cb7ba49-d823-4f53-b090-c6bcb63d57fc" containerID="9f06d0ad16185fe626e595b85509c4907607d6f1f3f1aa98675a6d73f0f530b7" exitCode=0 Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.079953 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n7hf" event={"ID":"2cb7ba49-d823-4f53-b090-c6bcb63d57fc","Type":"ContainerDied","Data":"9f06d0ad16185fe626e595b85509c4907607d6f1f3f1aa98675a6d73f0f530b7"} Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.080218 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n7hf" event={"ID":"2cb7ba49-d823-4f53-b090-c6bcb63d57fc","Type":"ContainerDied","Data":"c495e3d9497b30a3ea665c2ada21749bfeb4828baad2fe1d515c93807d553230"} Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.080235 4947 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="c495e3d9497b30a3ea665c2ada21749bfeb4828baad2fe1d515c93807d553230" Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.082839 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.128276 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-catalog-content\") pod \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.128367 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-utilities\") pod \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.128427 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th82f\" (UniqueName: \"kubernetes.io/projected/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-kube-api-access-th82f\") pod \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.129348 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-utilities" (OuterVolumeSpecName: "utilities") pod "2cb7ba49-d823-4f53-b090-c6bcb63d57fc" (UID: "2cb7ba49-d823-4f53-b090-c6bcb63d57fc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.133761 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-kube-api-access-th82f" (OuterVolumeSpecName: "kube-api-access-th82f") pod "2cb7ba49-d823-4f53-b090-c6bcb63d57fc" (UID: "2cb7ba49-d823-4f53-b090-c6bcb63d57fc"). InnerVolumeSpecName "kube-api-access-th82f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.208699 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cb7ba49-d823-4f53-b090-c6bcb63d57fc" (UID: "2cb7ba49-d823-4f53-b090-c6bcb63d57fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.229735 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th82f\" (UniqueName: \"kubernetes.io/projected/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-kube-api-access-th82f\") on node \"crc\" DevicePath \"\"" Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.229809 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.229835 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:43:47 crc kubenswrapper[4947]: I0125 00:43:47.072808 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:43:47 crc kubenswrapper[4947]: I0125 00:43:47.073392 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:43:47 crc kubenswrapper[4947]: I0125 00:43:47.073487 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:43:47 crc kubenswrapper[4947]: I0125 00:43:47.074691 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7004e8129c6b88f58bbbee7984e53d0d93f2f96afa9dd97b9bbb53a49f0a5277"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 00:43:47 crc kubenswrapper[4947]: I0125 00:43:47.074834 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" containerID="cri-o://7004e8129c6b88f58bbbee7984e53d0d93f2f96afa9dd97b9bbb53a49f0a5277" gracePeriod=600 Jan 25 00:43:47 crc kubenswrapper[4947]: I0125 00:43:47.086956 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:47 crc kubenswrapper[4947]: I0125 00:43:47.134199 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2n7hf"] Jan 25 00:43:47 crc kubenswrapper[4947]: I0125 00:43:47.140964 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2n7hf"] Jan 25 00:43:48 crc kubenswrapper[4947]: I0125 00:43:48.106061 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="7004e8129c6b88f58bbbee7984e53d0d93f2f96afa9dd97b9bbb53a49f0a5277" exitCode=0 Jan 25 00:43:48 crc kubenswrapper[4947]: I0125 00:43:48.106188 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"7004e8129c6b88f58bbbee7984e53d0d93f2f96afa9dd97b9bbb53a49f0a5277"} Jan 25 00:43:48 crc kubenswrapper[4947]: I0125 00:43:48.106572 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"89af1b195462f08a0ce6aa251460370d11275e9bef877a52b4c4ddedf0d1bce6"} Jan 25 00:43:48 crc kubenswrapper[4947]: I0125 00:43:48.106598 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:43:49 crc kubenswrapper[4947]: I0125 00:43:49.104378 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cb7ba49-d823-4f53-b090-c6bcb63d57fc" path="/var/lib/kubelet/pods/2cb7ba49-d823-4f53-b090-c6bcb63d57fc/volumes"